2025-08-29 18:39:34.255529 | Job console starting
2025-08-29 18:39:34.269662 | Updating git repos
2025-08-29 18:39:34.329944 | Cloning repos into workspace
2025-08-29 18:39:34.551350 | Restoring repo states
2025-08-29 18:39:34.577437 | Merging changes
2025-08-29 18:39:34.577459 | Checking out repos
2025-08-29 18:39:34.829614 | Preparing playbooks
2025-08-29 18:39:35.477807 | Running Ansible setup
2025-08-29 18:39:39.577579 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 18:39:40.324298 |
2025-08-29 18:39:40.324463 | PLAY [Base pre]
2025-08-29 18:39:40.341358 |
2025-08-29 18:39:40.341485 | TASK [Setup log path fact]
2025-08-29 18:39:40.371686 | orchestrator | ok
2025-08-29 18:39:40.389164 |
2025-08-29 18:39:40.389301 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 18:39:40.430494 | orchestrator | ok
2025-08-29 18:39:40.443669 |
2025-08-29 18:39:40.443801 | TASK [emit-job-header : Print job information]
2025-08-29 18:39:40.486021 | # Job Information
2025-08-29 18:39:40.486201 | Ansible Version: 2.16.14
2025-08-29 18:39:40.486236 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-08-29 18:39:40.486269 | Pipeline: post
2025-08-29 18:39:40.486292 | Executor: 521e9411259a
2025-08-29 18:39:40.486313 | Triggered by: https://github.com/osism/testbed/commit/d30cb1221507669d9907108aef5a42de4a852f42
2025-08-29 18:39:40.486334 | Event ID: 7cec9ef6-8507-11f0-82dc-a297a9c05e95
2025-08-29 18:39:40.493256 |
2025-08-29 18:39:40.493377 | LOOP [emit-job-header : Print node information]
2025-08-29 18:39:40.638627 | orchestrator | ok:
2025-08-29 18:39:40.638958 | orchestrator | # Node Information
2025-08-29 18:39:40.639019 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 18:39:40.639061 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 18:39:40.639097 | orchestrator | Username: zuul-testbed05
2025-08-29 18:39:40.639131 | orchestrator | Distro: Debian 12.11
2025-08-29 18:39:40.639169 | orchestrator | Provider: static-testbed
2025-08-29 18:39:40.639204 | orchestrator | Region:
2025-08-29 18:39:40.639238 | orchestrator | Label: testbed-orchestrator
2025-08-29 18:39:40.639272 | orchestrator | Product Name: OpenStack Nova
2025-08-29 18:39:40.639303 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 18:39:40.653808 |
2025-08-29 18:39:40.653945 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 18:39:41.124034 | orchestrator -> localhost | changed
2025-08-29 18:39:41.134625 |
2025-08-29 18:39:41.134795 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 18:39:42.221667 | orchestrator -> localhost | changed
2025-08-29 18:39:42.237475 |
2025-08-29 18:39:42.237595 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 18:39:42.528898 | orchestrator -> localhost | ok
2025-08-29 18:39:42.537620 |
2025-08-29 18:39:42.537792 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 18:39:42.585003 | orchestrator | ok
2025-08-29 18:39:42.607515 | orchestrator | included: /var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 18:39:42.616713 |
2025-08-29 18:39:42.616858 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 18:39:43.776463 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 18:39:43.776786 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/work/3b59bc79e5d64b9988697df210f773f3_id_rsa
2025-08-29 18:39:43.776832 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/work/3b59bc79e5d64b9988697df210f773f3_id_rsa.pub
2025-08-29 18:39:43.776859 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 18:39:43.776883 | orchestrator -> localhost | SHA256:C25CDebDqHDIlnthtcn7LkhF+N9bvEssVMNGy3Y1GbY zuul-build-sshkey
2025-08-29 18:39:43.776907 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 18:39:43.776945 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 18:39:43.776967 | orchestrator -> localhost | | . . =o|
2025-08-29 18:39:43.776988 | orchestrator -> localhost | | . . + . o.o|
2025-08-29 18:39:43.777009 | orchestrator -> localhost | | =. O . E |
2025-08-29 18:39:43.777029 | orchestrator -> localhost | |...=o=o + o |
2025-08-29 18:39:43.777048 | orchestrator -> localhost | |o+oo*++ S.. |
2025-08-29 18:39:43.777074 | orchestrator -> localhost | |oooo.o.o.o.o |
2025-08-29 18:39:43.777094 | orchestrator -> localhost | |....o.o ..oo. |
2025-08-29 18:39:43.777114 | orchestrator -> localhost | | .. +. .o. |
2025-08-29 18:39:43.777134 | orchestrator -> localhost | | oo .. |
2025-08-29 18:39:43.777154 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 18:39:43.777211 | orchestrator -> localhost | ok: Runtime: 0:00:00.621475
2025-08-29 18:39:43.785517 |
2025-08-29 18:39:43.785626 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 18:39:43.821671 | orchestrator | ok
2025-08-29 18:39:43.834983 | orchestrator | included: /var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 18:39:43.846627 |
2025-08-29 18:39:43.846743 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 18:39:43.871308 | orchestrator | skipping: Conditional result was False
2025-08-29 18:39:43.882324 |
2025-08-29 18:39:43.882439 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 18:39:44.499628 | orchestrator | changed
2025-08-29 18:39:44.506165 |
2025-08-29 18:39:44.506269 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 18:39:44.776998 | orchestrator | ok
2025-08-29 18:39:44.786859 |
2025-08-29 18:39:44.786996 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 18:39:45.192966 | orchestrator | ok
2025-08-29 18:39:45.201878 |
2025-08-29 18:39:45.202023 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 18:39:45.610482 | orchestrator | ok
2025-08-29 18:39:45.617507 |
2025-08-29 18:39:45.617610 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 18:39:45.641901 | orchestrator | skipping: Conditional result was False
2025-08-29 18:39:45.649566 |
2025-08-29 18:39:45.649671 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 18:39:46.078271 | orchestrator -> localhost | changed
2025-08-29 18:39:46.092672 |
2025-08-29 18:39:46.092851 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 18:39:46.421928 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/work/3b59bc79e5d64b9988697df210f773f3_id_rsa (zuul-build-sshkey)
2025-08-29 18:39:46.422357 | orchestrator -> localhost | ok: Runtime: 0:00:00.018883
2025-08-29 18:39:46.435100 |
2025-08-29 18:39:46.435221 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 18:39:46.894432 | orchestrator | ok
2025-08-29 18:39:46.902211 |
2025-08-29 18:39:46.902327 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 18:39:46.937046 | orchestrator | skipping: Conditional result was False
2025-08-29 18:39:46.995480 |
2025-08-29 18:39:46.995627 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 18:39:47.409479 | orchestrator | ok
2025-08-29 18:39:47.424732 |
2025-08-29 18:39:47.424874 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 18:39:47.454880 | orchestrator | ok
2025-08-29 18:39:47.462603 |
2025-08-29 18:39:47.462722 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 18:39:47.740949 | orchestrator -> localhost | ok
2025-08-29 18:39:47.749019 |
2025-08-29 18:39:47.749127 | TASK [validate-host : Collect information about the host]
2025-08-29 18:39:48.947479 | orchestrator | ok
2025-08-29 18:39:48.962667 |
2025-08-29 18:39:48.962796 | TASK [validate-host : Sanitize hostname]
2025-08-29 18:39:49.029302 | orchestrator | ok
2025-08-29 18:39:49.038384 |
2025-08-29 18:39:49.038525 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 18:39:49.611583 | orchestrator -> localhost | changed
2025-08-29 18:39:49.618637 |
2025-08-29 18:39:49.618787 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 18:39:50.062311 | orchestrator | ok
2025-08-29 18:39:50.071336 |
2025-08-29 18:39:50.071490 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 18:39:50.633647 | orchestrator -> localhost | changed
2025-08-29 18:39:50.645369 |
2025-08-29 18:39:50.645491 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 18:39:50.935154 | orchestrator | ok
2025-08-29 18:39:50.944289 |
2025-08-29 18:39:50.944419 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 18:40:32.509610 | orchestrator | changed:
2025-08-29 18:40:32.510008 | orchestrator | .d..t...... src/
2025-08-29 18:40:32.510050 | orchestrator | .d..t...... src/github.com/
2025-08-29 18:40:32.510076 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 18:40:32.510098 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 18:40:32.510119 | orchestrator | RedHat.yml
2025-08-29 18:40:32.534376 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 18:40:32.534414 | orchestrator | RedHat.yml
2025-08-29 18:40:32.534525 | orchestrator | = 2.2.0"...
2025-08-29 18:40:48.321237 | orchestrator | 18:40:48.321 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 18:40:48.355488 | orchestrator | 18:40:48.355 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-08-29 18:40:48.905445 | orchestrator | 18:40:48.905 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 18:40:49.793462 | orchestrator | 18:40:49.793 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 18:40:50.221930 | orchestrator | 18:40:50.221 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 18:40:50.917402 | orchestrator | 18:40:50.917 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 18:40:51.007844 | orchestrator | 18:40:51.007 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 18:40:51.522994 | orchestrator | 18:40:51.522 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 18:40:51.523242 | orchestrator | 18:40:51.523 STDOUT terraform: Providers are signed by their developers.
2025-08-29 18:40:51.523254 | orchestrator | 18:40:51.523 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 18:40:51.523259 | orchestrator | 18:40:51.523 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 18:40:51.523483 | orchestrator | 18:40:51.523 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 18:40:51.523495 | orchestrator | 18:40:51.523 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 18:40:51.523502 | orchestrator | 18:40:51.523 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 18:40:51.523506 | orchestrator | 18:40:51.523 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 18:40:51.524040 | orchestrator | 18:40:51.523 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 18:40:51.524332 | orchestrator | 18:40:51.524 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 18:40:51.524340 | orchestrator | 18:40:51.524 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 18:40:51.524344 | orchestrator | 18:40:51.524 STDOUT terraform: should now work.
2025-08-29 18:40:51.524348 | orchestrator | 18:40:51.524 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 18:40:51.524352 | orchestrator | 18:40:51.524 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 18:40:51.524357 | orchestrator | 18:40:51.524 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 18:40:51.643472 | orchestrator | 18:40:51.641 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-08-29 18:40:51.643589 | orchestrator | 18:40:51.641 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 18:40:51.842929 | orchestrator | 18:40:51.842 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 18:40:51.842990 | orchestrator | 18:40:51.842 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 18:40:51.842997 | orchestrator | 18:40:51.842 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 18:40:51.843001 | orchestrator | 18:40:51.842 STDOUT terraform: for this configuration.
2025-08-29 18:40:52.001205 | orchestrator | 18:40:52.000 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-08-29 18:40:52.001258 | orchestrator | 18:40:52.000 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 18:40:52.097335 | orchestrator | 18:40:52.097 STDOUT terraform: ci.auto.tfvars
2025-08-29 18:40:52.101299 | orchestrator | 18:40:52.101 STDOUT terraform: default_custom.tf
2025-08-29 18:40:52.237500 | orchestrator | 18:40:52.237 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-08-29 18:40:53.155201 | orchestrator | 18:40:53.155 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 18:40:53.687051 | orchestrator | 18:40:53.686 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 18:40:54.003430 | orchestrator | 18:40:54.001 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-08-29 18:40:54.006101 | orchestrator | 18:40:54.002 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-08-29 18:40:54.006122 | orchestrator | 18:40:54.002 STDOUT terraform:   + create
2025-08-29 18:40:54.006129 | orchestrator | 18:40:54.002 STDOUT terraform:  <= read (data resources)
2025-08-29 18:40:54.006136 | orchestrator | 18:40:54.002 STDOUT terraform: OpenTofu will perform the following actions:
2025-08-29 18:40:54.006140 | orchestrator | 18:40:54.002 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-08-29 18:40:54.006144 | orchestrator | 18:40:54.002 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 18:40:54.006148 | orchestrator | 18:40:54.002 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-08-29 18:40:54.006153 | orchestrator | 18:40:54.003 STDOUT terraform:   + checksum = (known after apply)
2025-08-29 18:40:54.006157 | orchestrator | 18:40:54.003 STDOUT terraform:   + created_at = (known after apply)
2025-08-29 18:40:54.006161 | orchestrator | 18:40:54.003 STDOUT terraform:   + file = (known after apply)
2025-08-29 18:40:54.006165 | orchestrator | 18:40:54.003 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006169 | orchestrator | 18:40:54.003 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.006186 | orchestrator | 18:40:54.003 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-08-29 18:40:54.006190 | orchestrator | 18:40:54.003 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-08-29 18:40:54.006194 | orchestrator | 18:40:54.003 STDOUT terraform:   + most_recent = true
2025-08-29 18:40:54.006198 | orchestrator | 18:40:54.003 STDOUT terraform:   + name = (known after apply)
2025-08-29 18:40:54.006201 | orchestrator | 18:40:54.003 STDOUT terraform:   + protected = (known after apply)
2025-08-29 18:40:54.006205 | orchestrator | 18:40:54.003 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.006209 | orchestrator | 18:40:54.003 STDOUT terraform:   + schema = (known after apply)
2025-08-29 18:40:54.006214 | orchestrator | 18:40:54.003 STDOUT terraform:   + size_bytes = (known after apply)
2025-08-29 18:40:54.006217 | orchestrator | 18:40:54.003 STDOUT terraform:   + tags = (known after apply)
2025-08-29 18:40:54.006221 | orchestrator | 18:40:54.003 STDOUT terraform:   + updated_at = (known after apply)
2025-08-29 18:40:54.006225 | orchestrator | 18:40:54.003 STDOUT terraform:   }
2025-08-29 18:40:54.006232 | orchestrator | 18:40:54.003 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-08-29 18:40:54.006236 | orchestrator | 18:40:54.003 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 18:40:54.006240 | orchestrator | 18:40:54.003 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-08-29 18:40:54.006244 | orchestrator | 18:40:54.003 STDOUT terraform:   + checksum = (known after apply)
2025-08-29 18:40:54.006248 | orchestrator | 18:40:54.003 STDOUT terraform:   + created_at = (known after apply)
2025-08-29 18:40:54.006256 | orchestrator | 18:40:54.003 STDOUT terraform:   + file = (known after apply)
2025-08-29 18:40:54.006260 | orchestrator | 18:40:54.003 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006263 | orchestrator | 18:40:54.003 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.006267 | orchestrator | 18:40:54.003 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-08-29 18:40:54.006271 | orchestrator | 18:40:54.003 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-08-29 18:40:54.006274 | orchestrator | 18:40:54.003 STDOUT terraform:   + most_recent = true
2025-08-29 18:40:54.006278 | orchestrator | 18:40:54.003 STDOUT terraform:   + name = (known after apply)
2025-08-29 18:40:54.006282 | orchestrator | 18:40:54.003 STDOUT terraform:   + protected = (known after apply)
2025-08-29 18:40:54.006286 | orchestrator | 18:40:54.003 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.006289 | orchestrator | 18:40:54.003 STDOUT terraform:   + schema = (known after apply)
2025-08-29 18:40:54.006293 | orchestrator | 18:40:54.003 STDOUT terraform:   + size_bytes = (known after apply)
2025-08-29 18:40:54.006303 | orchestrator | 18:40:54.003 STDOUT terraform:   + tags = (known after apply)
2025-08-29 18:40:54.006307 | orchestrator | 18:40:54.003 STDOUT terraform:   + updated_at = (known after apply)
2025-08-29 18:40:54.006311 | orchestrator | 18:40:54.003 STDOUT terraform:   }
2025-08-29 18:40:54.006315 | orchestrator | 18:40:54.003 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-08-29 18:40:54.006322 | orchestrator | 18:40:54.003 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-08-29 18:40:54.006326 | orchestrator | 18:40:54.004 STDOUT terraform:   + content = (known after apply)
2025-08-29 18:40:54.006330 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 18:40:54.006334 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 18:40:54.006338 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 18:40:54.006341 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 18:40:54.006345 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 18:40:54.006349 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 18:40:54.006353 | orchestrator | 18:40:54.004 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 18:40:54.006357 | orchestrator | 18:40:54.004 STDOUT terraform:   + file_permission = "0644"
2025-08-29 18:40:54.006360 | orchestrator | 18:40:54.004 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-08-29 18:40:54.006364 | orchestrator | 18:40:54.004 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006368 | orchestrator | 18:40:54.004 STDOUT terraform:   }
2025-08-29 18:40:54.006372 | orchestrator | 18:40:54.004 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-08-29 18:40:54.006375 | orchestrator | 18:40:54.004 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-08-29 18:40:54.006379 | orchestrator | 18:40:54.004 STDOUT terraform:   + content = (known after apply)
2025-08-29 18:40:54.006383 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 18:40:54.006386 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 18:40:54.006390 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 18:40:54.006394 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 18:40:54.006397 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 18:40:54.006404 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 18:40:54.006407 | orchestrator | 18:40:54.004 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 18:40:54.006411 | orchestrator | 18:40:54.004 STDOUT terraform:   + file_permission = "0644"
2025-08-29 18:40:54.006415 | orchestrator | 18:40:54.004 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-08-29 18:40:54.006418 | orchestrator | 18:40:54.004 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006422 | orchestrator | 18:40:54.004 STDOUT terraform:   }
2025-08-29 18:40:54.006426 | orchestrator | 18:40:54.004 STDOUT terraform:   # local_file.inventory will be created
2025-08-29 18:40:54.006429 | orchestrator | 18:40:54.004 STDOUT terraform:   + resource "local_file" "inventory" {
2025-08-29 18:40:54.006433 | orchestrator | 18:40:54.004 STDOUT terraform:   + content = (known after apply)
2025-08-29 18:40:54.006440 | orchestrator | 18:40:54.004 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 18:40:54.006444 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 18:40:54.006447 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 18:40:54.006451 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 18:40:54.006458 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 18:40:54.006462 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 18:40:54.006465 | orchestrator | 18:40:54.005 STDOUT terraform:   + directory_permission = "0777"
2025-08-29 18:40:54.006469 | orchestrator | 18:40:54.005 STDOUT terraform:   + file_permission = "0644"
2025-08-29 18:40:54.006473 | orchestrator | 18:40:54.005 STDOUT terraform:   + filename = "inventory.ci"
2025-08-29 18:40:54.006476 | orchestrator | 18:40:54.005 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006480 | orchestrator | 18:40:54.005 STDOUT terraform:   }
2025-08-29 18:40:54.006484 | orchestrator | 18:40:54.005 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-08-29 18:40:54.006488 | orchestrator | 18:40:54.005 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-08-29 18:40:54.006491 | orchestrator | 18:40:54.005 STDOUT terraform:   + content = (sensitive value)
2025-08-29 18:40:54.006495 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-08-29 18:40:54.006499 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-08-29 18:40:54.006503 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_md5 = (known after apply)
2025-08-29 18:40:54.006507 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_sha1 = (known after apply)
2025-08-29 18:40:54.006510 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_sha256 = (known after apply)
2025-08-29 18:40:54.006514 | orchestrator | 18:40:54.005 STDOUT terraform:   + content_sha512 = (known after apply)
2025-08-29 18:40:54.006518 | orchestrator | 18:40:54.005 STDOUT terraform:   + directory_permission = "0700"
2025-08-29 18:40:54.006521 | orchestrator | 18:40:54.005 STDOUT terraform:   + file_permission = "0600"
2025-08-29 18:40:54.006525 | orchestrator | 18:40:54.005 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-08-29 18:40:54.006531 | orchestrator | 18:40:54.005 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006535 | orchestrator | 18:40:54.005 STDOUT terraform:   }
2025-08-29 18:40:54.006539 | orchestrator | 18:40:54.005 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-08-29 18:40:54.006542 | orchestrator | 18:40:54.006 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-08-29 18:40:54.006547 | orchestrator | 18:40:54.006 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006550 | orchestrator | 18:40:54.006 STDOUT terraform:   }
2025-08-29 18:40:54.006554 | orchestrator | 18:40:54.006 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-08-29 18:40:54.006562 | orchestrator | 18:40:54.006 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-08-29 18:40:54.006566 | orchestrator | 18:40:54.006 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.006570 | orchestrator | 18:40:54.006 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.006576 | orchestrator | 18:40:54.006 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.006579 | orchestrator | 18:40:54.006 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.006626 | orchestrator | 18:40:54.006 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.006738 | orchestrator | 18:40:54.006 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-08-29 18:40:54.006798 | orchestrator | 18:40:54.006 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.006856 | orchestrator | 18:40:54.006 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.006869 | orchestrator | 18:40:54.006 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.006923 | orchestrator | 18:40:54.006 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.006928 | orchestrator | 18:40:54.006 STDOUT terraform:   }
2025-08-29 18:40:54.006968 | orchestrator | 18:40:54.006 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-08-29 18:40:54.007033 | orchestrator | 18:40:54.006 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 18:40:54.007091 | orchestrator | 18:40:54.007 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.007097 | orchestrator | 18:40:54.007 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.007172 | orchestrator | 18:40:54.007 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.007180 | orchestrator | 18:40:54.007 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.007222 | orchestrator | 18:40:54.007 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.007286 | orchestrator | 18:40:54.007 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-08-29 18:40:54.007297 | orchestrator | 18:40:54.007 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.008080 | orchestrator | 18:40:54.007 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.008096 | orchestrator | 18:40:54.007 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.008100 | orchestrator | 18:40:54.007 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.008104 | orchestrator | 18:40:54.007 STDOUT terraform:   }
2025-08-29 18:40:54.008108 | orchestrator | 18:40:54.007 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-08-29 18:40:54.008113 | orchestrator | 18:40:54.007 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 18:40:54.008116 | orchestrator | 18:40:54.007 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.008127 | orchestrator | 18:40:54.007 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.008131 | orchestrator | 18:40:54.007 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.008134 | orchestrator | 18:40:54.007 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.008138 | orchestrator | 18:40:54.007 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.008146 | orchestrator | 18:40:54.007 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-08-29 18:40:54.008150 | orchestrator | 18:40:54.007 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.008153 | orchestrator | 18:40:54.007 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.008157 | orchestrator | 18:40:54.007 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.008160 | orchestrator | 18:40:54.007 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.008164 | orchestrator | 18:40:54.007 STDOUT terraform:   }
2025-08-29 18:40:54.008168 | orchestrator | 18:40:54.007 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-08-29 18:40:54.008172 | orchestrator | 18:40:54.007 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 18:40:54.008175 | orchestrator | 18:40:54.007 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.008179 | orchestrator | 18:40:54.008 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.008182 | orchestrator | 18:40:54.008 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.008189 | orchestrator | 18:40:54.008 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.008193 | orchestrator | 18:40:54.008 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.008196 | orchestrator | 18:40:54.008 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-08-29 18:40:54.008370 | orchestrator | 18:40:54.008 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.008383 | orchestrator | 18:40:54.008 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.008387 | orchestrator | 18:40:54.008 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.009053 | orchestrator | 18:40:54.008 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.009073 | orchestrator | 18:40:54.008 STDOUT terraform:   }
2025-08-29 18:40:54.009077 | orchestrator | 18:40:54.008 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-08-29 18:40:54.009082 | orchestrator | 18:40:54.008 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 18:40:54.009086 | orchestrator | 18:40:54.008 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.009093 | orchestrator | 18:40:54.009 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.009107 | orchestrator | 18:40:54.009 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.014094 | orchestrator | 18:40:54.009 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.014147 | orchestrator | 18:40:54.009 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.014162 | orchestrator | 18:40:54.009 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-08-29 18:40:54.014166 | orchestrator | 18:40:54.009 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.014171 | orchestrator | 18:40:54.009 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.014175 | orchestrator | 18:40:54.009 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.014180 | orchestrator | 18:40:54.009 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.014184 | orchestrator | 18:40:54.009 STDOUT terraform:   }
2025-08-29 18:40:54.014188 | orchestrator | 18:40:54.009 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-08-29 18:40:54.014194 | orchestrator | 18:40:54.009 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 18:40:54.014198 | orchestrator | 18:40:54.009 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.014202 | orchestrator | 18:40:54.010 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.014205 | orchestrator | 18:40:54.010 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.014209 | orchestrator | 18:40:54.010 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.014213 | orchestrator | 18:40:54.010 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.014217 | orchestrator | 18:40:54.010 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-08-29 18:40:54.014221 | orchestrator | 18:40:54.010 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.014225 | orchestrator | 18:40:54.010 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.014229 | orchestrator | 18:40:54.010 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.014233 | orchestrator | 18:40:54.010 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.014237 | orchestrator | 18:40:54.010 STDOUT terraform:   }
2025-08-29 18:40:54.014241 | orchestrator | 18:40:54.010 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-08-29 18:40:54.014245 | orchestrator | 18:40:54.010 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 18:40:54.014248 | orchestrator | 18:40:54.010 STDOUT terraform:   + attachment = (known after apply)
2025-08-29 18:40:54.014252 | orchestrator | 18:40:54.010 STDOUT terraform:   + availability_zone = "nova"
2025-08-29 18:40:54.014262 | orchestrator | 18:40:54.010 STDOUT terraform:   + id = (known after apply)
2025-08-29 18:40:54.014266 | orchestrator | 18:40:54.010 STDOUT terraform:   + image_id = (known after apply)
2025-08-29 18:40:54.014270 | orchestrator | 18:40:54.010 STDOUT terraform:   + metadata = (known after apply)
2025-08-29 18:40:54.014274 | orchestrator | 18:40:54.010 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-08-29 18:40:54.014278 | orchestrator | 18:40:54.010 STDOUT terraform:   + region = (known after apply)
2025-08-29 18:40:54.014281 | orchestrator | 18:40:54.010 STDOUT terraform:   + size = 80
2025-08-29 18:40:54.014289 | orchestrator | 18:40:54.010 STDOUT terraform:   + volume_retype_policy = "never"
2025-08-29 18:40:54.014293 | orchestrator | 18:40:54.010 STDOUT terraform:   + volume_type = "ssd"
2025-08-29 18:40:54.014296 | orchestrator | 18:40:54.010 STDOUT terraform:   }
2025-08-29 18:40:54.014301 | orchestrator | 18:40:54.010 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-08-29 18:40:54.018069 | orchestrator | 18:40:54.010 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-08-29 18:40:54.018094 | orchestrator | 18:40:54.014 STDOUT
terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018099 | orchestrator | 18:40:54.014 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018103 | orchestrator | 18:40:54.014 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018107 | orchestrator | 18:40:54.014 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018123 | orchestrator | 18:40:54.014 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 18:40:54.018127 | orchestrator | 18:40:54.014 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018131 | orchestrator | 18:40:54.014 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018135 | orchestrator | 18:40:54.014 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018139 | orchestrator | 18:40:54.014 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018143 | orchestrator | 18:40:54.014 STDOUT terraform:  } 2025-08-29 18:40:54.018146 | orchestrator | 18:40:54.014 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 18:40:54.018151 | orchestrator | 18:40:54.014 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018155 | orchestrator | 18:40:54.014 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018159 | orchestrator | 18:40:54.015 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018163 | orchestrator | 18:40:54.015 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018167 | orchestrator | 18:40:54.015 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018170 | orchestrator | 18:40:54.015 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 18:40:54.018174 | orchestrator | 18:40:54.015 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018180 | orchestrator | 18:40:54.015 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018188 | 
orchestrator | 18:40:54.015 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018192 | orchestrator | 18:40:54.015 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018196 | orchestrator | 18:40:54.015 STDOUT terraform:  } 2025-08-29 18:40:54.018200 | orchestrator | 18:40:54.015 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 18:40:54.018204 | orchestrator | 18:40:54.015 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018208 | orchestrator | 18:40:54.015 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018218 | orchestrator | 18:40:54.015 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018222 | orchestrator | 18:40:54.015 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018226 | orchestrator | 18:40:54.015 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018229 | orchestrator | 18:40:54.015 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 18:40:54.018233 | orchestrator | 18:40:54.015 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018237 | orchestrator | 18:40:54.015 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018240 | orchestrator | 18:40:54.015 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018244 | orchestrator | 18:40:54.015 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018248 | orchestrator | 18:40:54.015 STDOUT terraform:  } 2025-08-29 18:40:54.018252 | orchestrator | 18:40:54.015 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 18:40:54.018255 | orchestrator | 18:40:54.015 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018266 | orchestrator | 18:40:54.015 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018270 | orchestrator | 
18:40:54.015 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018274 | orchestrator | 18:40:54.015 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018278 | orchestrator | 18:40:54.015 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018282 | orchestrator | 18:40:54.016 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 18:40:54.018285 | orchestrator | 18:40:54.016 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018289 | orchestrator | 18:40:54.016 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018293 | orchestrator | 18:40:54.016 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018297 | orchestrator | 18:40:54.016 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018300 | orchestrator | 18:40:54.016 STDOUT terraform:  } 2025-08-29 18:40:54.018304 | orchestrator | 18:40:54.016 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 18:40:54.018308 | orchestrator | 18:40:54.016 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018312 | orchestrator | 18:40:54.016 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018315 | orchestrator | 18:40:54.016 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018319 | orchestrator | 18:40:54.016 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018323 | orchestrator | 18:40:54.016 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018329 | orchestrator | 18:40:54.016 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 18:40:54.018333 | orchestrator | 18:40:54.016 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018340 | orchestrator | 18:40:54.016 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018344 | orchestrator | 18:40:54.016 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 
18:40:54.018348 | orchestrator | 18:40:54.016 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018351 | orchestrator | 18:40:54.016 STDOUT terraform:  } 2025-08-29 18:40:54.018355 | orchestrator | 18:40:54.016 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 18:40:54.018359 | orchestrator | 18:40:54.016 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018363 | orchestrator | 18:40:54.016 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018366 | orchestrator | 18:40:54.016 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018370 | orchestrator | 18:40:54.016 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018374 | orchestrator | 18:40:54.016 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018378 | orchestrator | 18:40:54.016 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 18:40:54.018381 | orchestrator | 18:40:54.016 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018385 | orchestrator | 18:40:54.016 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018389 | orchestrator | 18:40:54.016 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018393 | orchestrator | 18:40:54.016 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018396 | orchestrator | 18:40:54.017 STDOUT terraform:  } 2025-08-29 18:40:54.018400 | orchestrator | 18:40:54.017 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 18:40:54.018404 | orchestrator | 18:40:54.017 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018408 | orchestrator | 18:40:54.017 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018417 | orchestrator | 18:40:54.017 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018421 | 
orchestrator | 18:40:54.017 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018425 | orchestrator | 18:40:54.017 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018428 | orchestrator | 18:40:54.017 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 18:40:54.018432 | orchestrator | 18:40:54.017 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018436 | orchestrator | 18:40:54.017 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018440 | orchestrator | 18:40:54.017 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018443 | orchestrator | 18:40:54.017 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.018447 | orchestrator | 18:40:54.017 STDOUT terraform:  } 2025-08-29 18:40:54.018451 | orchestrator | 18:40:54.017 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 18:40:54.018455 | orchestrator | 18:40:54.017 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018461 | orchestrator | 18:40:54.017 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018465 | orchestrator | 18:40:54.017 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.018469 | orchestrator | 18:40:54.017 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.018473 | orchestrator | 18:40:54.017 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.018476 | orchestrator | 18:40:54.017 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 18:40:54.018480 | orchestrator | 18:40:54.017 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.018486 | orchestrator | 18:40:54.017 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.018490 | orchestrator | 18:40:54.017 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.018494 | orchestrator | 18:40:54.017 STDOUT terraform:  + volume_type = "ssd" 
2025-08-29 18:40:54.018498 | orchestrator | 18:40:54.017 STDOUT terraform:  } 2025-08-29 18:40:54.018502 | orchestrator | 18:40:54.017 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 18:40:54.018505 | orchestrator | 18:40:54.017 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 18:40:54.018894 | orchestrator | 18:40:54.017 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 18:40:54.018968 | orchestrator | 18:40:54.018 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.019045 | orchestrator | 18:40:54.018 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.019122 | orchestrator | 18:40:54.018 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 18:40:54.019127 | orchestrator | 18:40:54.019 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 18:40:54.019201 | orchestrator | 18:40:54.019 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.019207 | orchestrator | 18:40:54.019 STDOUT terraform:  + size = 20 2025-08-29 18:40:54.019278 | orchestrator | 18:40:54.019 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 18:40:54.019355 | orchestrator | 18:40:54.019 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 18:40:54.019361 | orchestrator | 18:40:54.019 STDOUT terraform:  } 2025-08-29 18:40:54.019433 | orchestrator | 18:40:54.019 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 18:40:54.019510 | orchestrator | 18:40:54.019 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 18:40:54.019516 | orchestrator | 18:40:54.019 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 18:40:54.019587 | orchestrator | 18:40:54.019 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 18:40:54.019593 | orchestrator | 18:40:54.019 STDOUT terraform:  + all_metadata = (known after apply) 
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
orchestrator | 18:40:54.028 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 18:40:54.028258 | orchestrator | 18:40:54.028 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 18:40:54.028308 | orchestrator | 18:40:54.028 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.028342 | orchestrator | 18:40:54.028 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.028374 | orchestrator | 18:40:54.028 STDOUT terraform:  + config_drive = true 2025-08-29 18:40:54.028414 | orchestrator | 18:40:54.028 STDOUT terraform:  + created = (known after apply) 2025-08-29 18:40:54.028455 | orchestrator | 18:40:54.028 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 18:40:54.028491 | orchestrator | 18:40:54.028 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 18:40:54.028522 | orchestrator | 18:40:54.028 STDOUT terraform:  + force_delete = false 2025-08-29 18:40:54.028563 | orchestrator | 18:40:54.028 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 18:40:54.028604 | orchestrator | 18:40:54.028 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.028645 | orchestrator | 18:40:54.028 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 18:40:54.028702 | orchestrator | 18:40:54.028 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 18:40:54.028735 | orchestrator | 18:40:54.028 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 18:40:54.028773 | orchestrator | 18:40:54.028 STDOUT terraform:  + name = "testbed-node-4" 2025-08-29 18:40:54.028804 | orchestrator | 18:40:54.028 STDOUT terraform:  + power_state = "active" 2025-08-29 18:40:54.028846 | orchestrator | 18:40:54.028 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.028885 | orchestrator | 18:40:54.028 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 18:40:54.028916 | orchestrator | 18:40:54.028 STDOUT terraform:  + stop_before_destroy = 
false 2025-08-29 18:40:54.028956 | orchestrator | 18:40:54.028 STDOUT terraform:  + updated = (known after apply) 2025-08-29 18:40:54.029012 | orchestrator | 18:40:54.028 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 18:40:54.029035 | orchestrator | 18:40:54.029 STDOUT terraform:  + block_device { 2025-08-29 18:40:54.029066 | orchestrator | 18:40:54.029 STDOUT terraform:  + boot_index = 0 2025-08-29 18:40:54.029101 | orchestrator | 18:40:54.029 STDOUT terraform:  + delete_on_termination = false 2025-08-29 18:40:54.029137 | orchestrator | 18:40:54.029 STDOUT terraform:  + destination_type = "volume" 2025-08-29 18:40:54.029173 | orchestrator | 18:40:54.029 STDOUT terraform:  + multiattach = false 2025-08-29 18:40:54.029209 | orchestrator | 18:40:54.029 STDOUT terraform:  + source_type = "volume" 2025-08-29 18:40:54.029253 | orchestrator | 18:40:54.029 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 18:40:54.029275 | orchestrator | 18:40:54.029 STDOUT terraform:  } 2025-08-29 18:40:54.029297 | orchestrator | 18:40:54.029 STDOUT terraform:  + network { 2025-08-29 18:40:54.029324 | orchestrator | 18:40:54.029 STDOUT terraform:  + access_network = false 2025-08-29 18:40:54.029360 | orchestrator | 18:40:54.029 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 18:40:54.029398 | orchestrator | 18:40:54.029 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 18:40:54.029435 | orchestrator | 18:40:54.029 STDOUT terraform:  + mac = (known after apply) 2025-08-29 18:40:54.029479 | orchestrator | 18:40:54.029 STDOUT terraform:  + name = (known after apply) 2025-08-29 18:40:54.029516 | orchestrator | 18:40:54.029 STDOUT terraform:  + port = (known after apply) 2025-08-29 18:40:54.029555 | orchestrator | 18:40:54.029 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 18:40:54.029578 | orchestrator | 18:40:54.029 STDOUT terraform:  } 2025-08-29 18:40:54.029598 | orchestrator | 18:40:54.029 
STDOUT terraform:  } 2025-08-29 18:40:54.029652 | orchestrator | 18:40:54.029 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-08-29 18:40:54.029716 | orchestrator | 18:40:54.029 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 18:40:54.029806 | orchestrator | 18:40:54.029 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 18:40:54.029850 | orchestrator | 18:40:54.029 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 18:40:54.029892 | orchestrator | 18:40:54.029 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 18:40:54.029936 | orchestrator | 18:40:54.029 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.029966 | orchestrator | 18:40:54.029 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 18:40:54.029995 | orchestrator | 18:40:54.029 STDOUT terraform:  + config_drive = true 2025-08-29 18:40:54.030050 | orchestrator | 18:40:54.030 STDOUT terraform:  + created = (known after apply) 2025-08-29 18:40:54.030094 | orchestrator | 18:40:54.030 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 18:40:54.030133 | orchestrator | 18:40:54.030 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 18:40:54.030168 | orchestrator | 18:40:54.030 STDOUT terraform:  + force_delete = false 2025-08-29 18:40:54.030209 | orchestrator | 18:40:54.030 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 18:40:54.030250 | orchestrator | 18:40:54.030 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.030291 | orchestrator | 18:40:54.030 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 18:40:54.030331 | orchestrator | 18:40:54.030 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 18:40:54.030362 | orchestrator | 18:40:54.030 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 18:40:54.030399 | orchestrator | 18:40:54.030 STDOUT terraform:  + name = 
"testbed-node-5" 2025-08-29 18:40:54.030429 | orchestrator | 18:40:54.030 STDOUT terraform:  + power_state = "active" 2025-08-29 18:40:54.030471 | orchestrator | 18:40:54.030 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.030517 | orchestrator | 18:40:54.030 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 18:40:54.030547 | orchestrator | 18:40:54.030 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 18:40:54.030589 | orchestrator | 18:40:54.030 STDOUT terraform:  + updated = (known after apply) 2025-08-29 18:40:54.030644 | orchestrator | 18:40:54.030 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 18:40:54.030694 | orchestrator | 18:40:54.030 STDOUT terraform:  + block_device { 2025-08-29 18:40:54.030732 | orchestrator | 18:40:54.030 STDOUT terraform:  + boot_index = 0 2025-08-29 18:40:54.030766 | orchestrator | 18:40:54.030 STDOUT terraform:  + delete_on_termination = false 2025-08-29 18:40:54.030801 | orchestrator | 18:40:54.030 STDOUT terraform:  + destination_type = "volume" 2025-08-29 18:40:54.030835 | orchestrator | 18:40:54.030 STDOUT terraform:  + multiattach = false 2025-08-29 18:40:54.030872 | orchestrator | 18:40:54.030 STDOUT terraform:  + source_type = "volume" 2025-08-29 18:40:54.030915 | orchestrator | 18:40:54.030 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 18:40:54.030939 | orchestrator | 18:40:54.030 STDOUT terraform:  } 2025-08-29 18:40:54.030960 | orchestrator | 18:40:54.030 STDOUT terraform:  + network { 2025-08-29 18:40:54.030988 | orchestrator | 18:40:54.030 STDOUT terraform:  + access_network = false 2025-08-29 18:40:54.031025 | orchestrator | 18:40:54.030 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 18:40:54.031062 | orchestrator | 18:40:54.031 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 18:40:54.031103 | orchestrator | 18:40:54.031 STDOUT terraform:  + mac = (known after apply) 2025-08-29 
18:40:54.031143 | orchestrator | 18:40:54.031 STDOUT terraform:  + name = (known after apply) 2025-08-29 18:40:54.031179 | orchestrator | 18:40:54.031 STDOUT terraform:  + port = (known after apply) 2025-08-29 18:40:54.031215 | orchestrator | 18:40:54.031 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 18:40:54.031235 | orchestrator | 18:40:54.031 STDOUT terraform:  } 2025-08-29 18:40:54.031255 | orchestrator | 18:40:54.031 STDOUT terraform:  } 2025-08-29 18:40:54.031295 | orchestrator | 18:40:54.031 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-08-29 18:40:54.031336 | orchestrator | 18:40:54.031 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-08-29 18:40:54.031369 | orchestrator | 18:40:54.031 STDOUT terraform:  + fingerprint = (known after apply) 2025-08-29 18:40:54.031403 | orchestrator | 18:40:54.031 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.031430 | orchestrator | 18:40:54.031 STDOUT terraform:  + name = "testbed" 2025-08-29 18:40:54.031460 | orchestrator | 18:40:54.031 STDOUT terraform:  + private_key = (sensitive value) 2025-08-29 18:40:54.031495 | orchestrator | 18:40:54.031 STDOUT terraform:  + public_key = (known after apply) 2025-08-29 18:40:54.031529 | orchestrator | 18:40:54.031 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.031567 | orchestrator | 18:40:54.031 STDOUT terraform:  + user_id = (known after apply) 2025-08-29 18:40:54.031587 | orchestrator | 18:40:54.031 STDOUT terraform:  } 2025-08-29 18:40:54.031641 | orchestrator | 18:40:54.031 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-08-29 18:40:54.031718 | orchestrator | 18:40:54.031 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.031757 | orchestrator | 18:40:54.031 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.031792 | orchestrator | 
18:40:54.031 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.031831 | orchestrator | 18:40:54.031 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.031865 | orchestrator | 18:40:54.031 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.031901 | orchestrator | 18:40:54.031 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.031922 | orchestrator | 18:40:54.031 STDOUT terraform:  } 2025-08-29 18:40:54.031978 | orchestrator | 18:40:54.031 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-08-29 18:40:54.032031 | orchestrator | 18:40:54.031 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.032064 | orchestrator | 18:40:54.032 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.032102 | orchestrator | 18:40:54.032 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.032137 | orchestrator | 18:40:54.032 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.032170 | orchestrator | 18:40:54.032 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.032203 | orchestrator | 18:40:54.032 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.032222 | orchestrator | 18:40:54.032 STDOUT terraform:  } 2025-08-29 18:40:54.032330 | orchestrator | 18:40:54.032 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-08-29 18:40:54.032385 | orchestrator | 18:40:54.032 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.032420 | orchestrator | 18:40:54.032 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.032454 | orchestrator | 18:40:54.032 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.032487 | orchestrator | 18:40:54.032 STDOUT terraform:  + instance_id = 
(known after apply) 2025-08-29 18:40:54.032520 | orchestrator | 18:40:54.032 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.032553 | orchestrator | 18:40:54.032 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.032573 | orchestrator | 18:40:54.032 STDOUT terraform:  } 2025-08-29 18:40:54.032628 | orchestrator | 18:40:54.032 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-08-29 18:40:54.032709 | orchestrator | 18:40:54.032 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.032746 | orchestrator | 18:40:54.032 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.032781 | orchestrator | 18:40:54.032 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.032815 | orchestrator | 18:40:54.032 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.032850 | orchestrator | 18:40:54.032 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.032884 | orchestrator | 18:40:54.032 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.032904 | orchestrator | 18:40:54.032 STDOUT terraform:  } 2025-08-29 18:40:54.032958 | orchestrator | 18:40:54.032 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-08-29 18:40:54.033019 | orchestrator | 18:40:54.032 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.033054 | orchestrator | 18:40:54.033 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.033093 | orchestrator | 18:40:54.033 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.033130 | orchestrator | 18:40:54.033 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.033166 | orchestrator | 18:40:54.033 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.033200 
| orchestrator | 18:40:54.033 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.033221 | orchestrator | 18:40:54.033 STDOUT terraform:  } 2025-08-29 18:40:54.033276 | orchestrator | 18:40:54.033 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-08-29 18:40:54.033330 | orchestrator | 18:40:54.033 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.033366 | orchestrator | 18:40:54.033 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.033401 | orchestrator | 18:40:54.033 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.033435 | orchestrator | 18:40:54.033 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.033470 | orchestrator | 18:40:54.033 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.033546 | orchestrator | 18:40:54.033 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.033570 | orchestrator | 18:40:54.033 STDOUT terraform:  } 2025-08-29 18:40:54.033625 | orchestrator | 18:40:54.033 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-08-29 18:40:54.033694 | orchestrator | 18:40:54.033 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.033730 | orchestrator | 18:40:54.033 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.033766 | orchestrator | 18:40:54.033 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.033803 | orchestrator | 18:40:54.033 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.033837 | orchestrator | 18:40:54.033 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.033872 | orchestrator | 18:40:54.033 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.033893 | orchestrator | 18:40:54.033 STDOUT 
terraform:  } 2025-08-29 18:40:54.033947 | orchestrator | 18:40:54.033 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-08-29 18:40:54.034002 | orchestrator | 18:40:54.033 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.034056 | orchestrator | 18:40:54.034 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.034093 | orchestrator | 18:40:54.034 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.034127 | orchestrator | 18:40:54.034 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.034162 | orchestrator | 18:40:54.034 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.034204 | orchestrator | 18:40:54.034 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.034226 | orchestrator | 18:40:54.034 STDOUT terraform:  } 2025-08-29 18:40:54.034282 | orchestrator | 18:40:54.034 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-08-29 18:40:54.034336 | orchestrator | 18:40:54.034 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 18:40:54.034373 | orchestrator | 18:40:54.034 STDOUT terraform:  + device = (known after apply) 2025-08-29 18:40:54.034408 | orchestrator | 18:40:54.034 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.034443 | orchestrator | 18:40:54.034 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 18:40:54.034477 | orchestrator | 18:40:54.034 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.034516 | orchestrator | 18:40:54.034 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 18:40:54.034537 | orchestrator | 18:40:54.034 STDOUT terraform:  } 2025-08-29 18:40:54.034603 | orchestrator | 18:40:54.034 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-08-29 18:40:54.034693 | orchestrator | 18:40:54.034 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-08-29 18:40:54.034730 | orchestrator | 18:40:54.034 STDOUT terraform:  + fixed_ip = (known after apply) 2025-08-29 18:40:54.034765 | orchestrator | 18:40:54.034 STDOUT terraform:  + floating_ip = (known after apply) 2025-08-29 18:40:54.034801 | orchestrator | 18:40:54.034 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.034836 | orchestrator | 18:40:54.034 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 18:40:54.034872 | orchestrator | 18:40:54.034 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.034893 | orchestrator | 18:40:54.034 STDOUT terraform:  } 2025-08-29 18:40:54.034946 | orchestrator | 18:40:54.034 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-08-29 18:40:54.035021 | orchestrator | 18:40:54.034 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-08-29 18:40:54.035054 | orchestrator | 18:40:54.035 STDOUT terraform:  + address = (known after apply) 2025-08-29 18:40:54.035105 | orchestrator | 18:40:54.035 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.035139 | orchestrator | 18:40:54.035 STDOUT terraform:  + dns_domain = (known after apply) 2025-08-29 18:40:54.035189 | orchestrator | 18:40:54.035 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.035223 | orchestrator | 18:40:54.035 STDOUT terraform:  + fixed_ip = (known after apply) 2025-08-29 18:40:54.035269 | orchestrator | 18:40:54.035 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.035297 | orchestrator | 18:40:54.035 STDOUT terraform:  + pool = "public" 2025-08-29 18:40:54.035345 | orchestrator | 18:40:54.035 STDOUT terraform:  + 
port_id = (known after apply) 2025-08-29 18:40:54.035379 | orchestrator | 18:40:54.035 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.035429 | orchestrator | 18:40:54.035 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.035484 | orchestrator | 18:40:54.035 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.035507 | orchestrator | 18:40:54.035 STDOUT terraform:  } 2025-08-29 18:40:54.035573 | orchestrator | 18:40:54.035 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-08-29 18:40:54.035639 | orchestrator | 18:40:54.035 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-08-29 18:40:54.035714 | orchestrator | 18:40:54.035 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.035760 | orchestrator | 18:40:54.035 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.035806 | orchestrator | 18:40:54.035 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 18:40:54.035829 | orchestrator | 18:40:54.035 STDOUT terraform:  + "nova", 2025-08-29 18:40:54.035852 | orchestrator | 18:40:54.035 STDOUT terraform:  ] 2025-08-29 18:40:54.035911 | orchestrator | 18:40:54.035 STDOUT terraform:  + dns_domain = (known after apply) 2025-08-29 18:40:54.035972 | orchestrator | 18:40:54.035 STDOUT terraform:  + external = (known after apply) 2025-08-29 18:40:54.036034 | orchestrator | 18:40:54.035 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.036080 | orchestrator | 18:40:54.036 STDOUT terraform:  + mtu = (known after apply) 2025-08-29 18:40:54.036139 | orchestrator | 18:40:54.036 STDOUT terraform:  + name = "net-testbed-management" 2025-08-29 18:40:54.036198 | orchestrator | 18:40:54.036 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.036256 | orchestrator | 18:40:54.036 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 
18:40:54.036301 | orchestrator | 18:40:54.036 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.036360 | orchestrator | 18:40:54.036 STDOUT terraform:  + shared = (known after apply) 2025-08-29 18:40:54.036421 | orchestrator | 18:40:54.036 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.036523 | orchestrator | 18:40:54.036 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-08-29 18:40:54.036647 | orchestrator | 18:40:54.036 STDOUT terraform:  + segments (known after apply) 2025-08-29 18:40:54.036753 | orchestrator | 18:40:54.036 STDOUT terraform:  } 2025-08-29 18:40:54.036826 | orchestrator | 18:40:54.036 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-08-29 18:40:54.036897 | orchestrator | 18:40:54.036 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-08-29 18:40:54.036957 | orchestrator | 18:40:54.036 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.037016 | orchestrator | 18:40:54.036 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.037059 | orchestrator | 18:40:54.037 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 18:40:54.037123 | orchestrator | 18:40:54.037 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.037188 | orchestrator | 18:40:54.037 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.037257 | orchestrator | 18:40:54.037 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 18:40:54.037302 | orchestrator | 18:40:54.037 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.037360 | orchestrator | 18:40:54.037 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.037419 | orchestrator | 18:40:54.037 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.037462 | orchestrator | 18:40:54.037 STDOUT terraform:  + 
mac_address = (known after apply) 2025-08-29 18:40:54.037520 | orchestrator | 18:40:54.037 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.037579 | orchestrator | 18:40:54.037 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.037639 | orchestrator | 18:40:54.037 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.037701 | orchestrator | 18:40:54.037 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.037760 | orchestrator | 18:40:54.037 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.037817 | orchestrator | 18:40:54.037 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.037845 | orchestrator | 18:40:54.037 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.037895 | orchestrator | 18:40:54.037 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 18:40:54.037918 | orchestrator | 18:40:54.037 STDOUT terraform:  } 2025-08-29 18:40:54.037956 | orchestrator | 18:40:54.037 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.037992 | orchestrator | 18:40:54.037 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.038033 | orchestrator | 18:40:54.038 STDOUT terraform:  } 2025-08-29 18:40:54.038067 | orchestrator | 18:40:54.038 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.038090 | orchestrator | 18:40:54.038 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.038137 | orchestrator | 18:40:54.038 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-08-29 18:40:54.038188 | orchestrator | 18:40:54.038 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.038210 | orchestrator | 18:40:54.038 STDOUT terraform:  } 2025-08-29 18:40:54.038231 | orchestrator | 18:40:54.038 STDOUT terraform:  } 2025-08-29 18:40:54.038298 | orchestrator | 18:40:54.038 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-08-29 18:40:54.038366 | orchestrator | 18:40:54.038 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 18:40:54.038433 | orchestrator | 18:40:54.038 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.038481 | orchestrator | 18:40:54.038 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.038540 | orchestrator | 18:40:54.038 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 18:40:54.038602 | orchestrator | 18:40:54.038 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.038673 | orchestrator | 18:40:54.038 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.038737 | orchestrator | 18:40:54.038 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 18:40:54.038780 | orchestrator | 18:40:54.038 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.038837 | orchestrator | 18:40:54.038 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.038894 | orchestrator | 18:40:54.038 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.038937 | orchestrator | 18:40:54.038 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 18:40:54.038994 | orchestrator | 18:40:54.038 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.039038 | orchestrator | 18:40:54.039 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.039095 | orchestrator | 18:40:54.039 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.039138 | orchestrator | 18:40:54.039 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.039181 | orchestrator | 18:40:54.039 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.039239 | orchestrator | 18:40:54.039 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.039265 | 
orchestrator | 18:40:54.039 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.039301 | orchestrator | 18:40:54.039 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 18:40:54.039322 | orchestrator | 18:40:54.039 STDOUT terraform:  } 2025-08-29 18:40:54.039348 | orchestrator | 18:40:54.039 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.039383 | orchestrator | 18:40:54.039 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 18:40:54.039418 | orchestrator | 18:40:54.039 STDOUT terraform:  } 2025-08-29 18:40:54.039448 | orchestrator | 18:40:54.039 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.039484 | orchestrator | 18:40:54.039 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.039506 | orchestrator | 18:40:54.039 STDOUT terraform:  } 2025-08-29 18:40:54.039533 | orchestrator | 18:40:54.039 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.039569 | orchestrator | 18:40:54.039 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 18:40:54.039591 | orchestrator | 18:40:54.039 STDOUT terraform:  } 2025-08-29 18:40:54.039622 | orchestrator | 18:40:54.039 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.039677 | orchestrator | 18:40:54.039 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.039712 | orchestrator | 18:40:54.039 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-08-29 18:40:54.039749 | orchestrator | 18:40:54.039 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.039770 | orchestrator | 18:40:54.039 STDOUT terraform:  } 2025-08-29 18:40:54.039801 | orchestrator | 18:40:54.039 STDOUT terraform:  } 2025-08-29 18:40:54.039855 | orchestrator | 18:40:54.039 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-08-29 18:40:54.039913 | orchestrator | 18:40:54.039 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 
18:40:54.039964 | orchestrator | 18:40:54.039 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.040016 | orchestrator | 18:40:54.039 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.040059 | orchestrator | 18:40:54.040 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 18:40:54.040107 | orchestrator | 18:40:54.040 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.040152 | orchestrator | 18:40:54.040 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.040193 | orchestrator | 18:40:54.040 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 18:40:54.040234 | orchestrator | 18:40:54.040 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.040278 | orchestrator | 18:40:54.040 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.040319 | orchestrator | 18:40:54.040 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.040360 | orchestrator | 18:40:54.040 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 18:40:54.040403 | orchestrator | 18:40:54.040 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.040444 | orchestrator | 18:40:54.040 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.040485 | orchestrator | 18:40:54.040 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.040527 | orchestrator | 18:40:54.040 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.040569 | orchestrator | 18:40:54.040 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.040611 | orchestrator | 18:40:54.040 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.040637 | orchestrator | 18:40:54.040 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.040712 | orchestrator | 18:40:54.040 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-08-29 18:40:54.040736 | orchestrator | 18:40:54.040 STDOUT terraform:  } 2025-08-29 18:40:54.040763 | orchestrator | 18:40:54.040 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.040800 | orchestrator | 18:40:54.040 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 18:40:54.040823 | orchestrator | 18:40:54.040 STDOUT terraform:  } 2025-08-29 18:40:54.040850 | orchestrator | 18:40:54.040 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.040886 | orchestrator | 18:40:54.040 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.040907 | orchestrator | 18:40:54.040 STDOUT terraform:  } 2025-08-29 18:40:54.040934 | orchestrator | 18:40:54.040 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.040968 | orchestrator | 18:40:54.040 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 18:40:54.040990 | orchestrator | 18:40:54.040 STDOUT terraform:  } 2025-08-29 18:40:54.041020 | orchestrator | 18:40:54.040 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.041041 | orchestrator | 18:40:54.041 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.041078 | orchestrator | 18:40:54.041 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-08-29 18:40:54.041116 | orchestrator | 18:40:54.041 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.041136 | orchestrator | 18:40:54.041 STDOUT terraform:  } 2025-08-29 18:40:54.041157 | orchestrator | 18:40:54.041 STDOUT terraform:  } 2025-08-29 18:40:54.041215 | orchestrator | 18:40:54.041 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-08-29 18:40:54.041268 | orchestrator | 18:40:54.041 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 18:40:54.041310 | orchestrator | 18:40:54.041 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.041353 | orchestrator | 18:40:54.041 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.041398 | orchestrator | 18:40:54.041 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 18:40:54.041441 | orchestrator | 18:40:54.041 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.041485 | orchestrator | 18:40:54.041 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.041527 | orchestrator | 18:40:54.041 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 18:40:54.041570 | orchestrator | 18:40:54.041 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.041614 | orchestrator | 18:40:54.041 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.041671 | orchestrator | 18:40:54.041 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.041714 | orchestrator | 18:40:54.041 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 18:40:54.041757 | orchestrator | 18:40:54.041 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.041800 | orchestrator | 18:40:54.041 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.041842 | orchestrator | 18:40:54.041 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.041885 | orchestrator | 18:40:54.041 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.041927 | orchestrator | 18:40:54.041 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.041976 | orchestrator | 18:40:54.041 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.042006 | orchestrator | 18:40:54.041 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.042058 | orchestrator | 18:40:54.042 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 18:40:54.042079 | orchestrator | 18:40:54.042 STDOUT terraform:  } 2025-08-29 18:40:54.042105 | orchestrator | 18:40:54.042 STDOUT terraform:  
+ allowed_address_pairs { 2025-08-29 18:40:54.042139 | orchestrator | 18:40:54.042 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 18:40:54.042159 | orchestrator | 18:40:54.042 STDOUT terraform:  } 2025-08-29 18:40:54.042185 | orchestrator | 18:40:54.042 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.042219 | orchestrator | 18:40:54.042 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.042244 | orchestrator | 18:40:54.042 STDOUT terraform:  } 2025-08-29 18:40:54.042270 | orchestrator | 18:40:54.042 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.042306 | orchestrator | 18:40:54.042 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 18:40:54.042326 | orchestrator | 18:40:54.042 STDOUT terraform:  } 2025-08-29 18:40:54.042356 | orchestrator | 18:40:54.042 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.042377 | orchestrator | 18:40:54.042 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.042408 | orchestrator | 18:40:54.042 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 18:40:54.042443 | orchestrator | 18:40:54.042 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.042464 | orchestrator | 18:40:54.042 STDOUT terraform:  } 2025-08-29 18:40:54.042484 | orchestrator | 18:40:54.042 STDOUT terraform:  } 2025-08-29 18:40:54.042535 | orchestrator | 18:40:54.042 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 18:40:54.042595 | orchestrator | 18:40:54.042 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 18:40:54.042645 | orchestrator | 18:40:54.042 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.042703 | orchestrator | 18:40:54.042 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.042745 | orchestrator | 18:40:54.042 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-08-29 18:40:54.042787 | orchestrator | 18:40:54.042 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.042835 | orchestrator | 18:40:54.042 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.042899 | orchestrator | 18:40:54.042 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 18:40:54.042942 | orchestrator | 18:40:54.042 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.042984 | orchestrator | 18:40:54.042 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.043030 | orchestrator | 18:40:54.042 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.043078 | orchestrator | 18:40:54.043 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 18:40:54.043119 | orchestrator | 18:40:54.043 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.043160 | orchestrator | 18:40:54.043 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.043203 | orchestrator | 18:40:54.043 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.043246 | orchestrator | 18:40:54.043 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.043289 | orchestrator | 18:40:54.043 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.043332 | orchestrator | 18:40:54.043 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.043368 | orchestrator | 18:40:54.043 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.043406 | orchestrator | 18:40:54.043 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 18:40:54.043434 | orchestrator | 18:40:54.043 STDOUT terraform:  } 2025-08-29 18:40:54.043461 | orchestrator | 18:40:54.043 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.043498 | orchestrator | 18:40:54.043 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 18:40:54.043519 | 
orchestrator | 18:40:54.043 STDOUT terraform:  } 2025-08-29 18:40:54.043545 | orchestrator | 18:40:54.043 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.043592 | orchestrator | 18:40:54.043 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.043613 | orchestrator | 18:40:54.043 STDOUT terraform:  } 2025-08-29 18:40:54.043639 | orchestrator | 18:40:54.043 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.043703 | orchestrator | 18:40:54.043 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 18:40:54.043726 | orchestrator | 18:40:54.043 STDOUT terraform:  } 2025-08-29 18:40:54.043756 | orchestrator | 18:40:54.043 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.043780 | orchestrator | 18:40:54.043 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.043812 | orchestrator | 18:40:54.043 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 18:40:54.043848 | orchestrator | 18:40:54.043 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.043869 | orchestrator | 18:40:54.043 STDOUT terraform:  } 2025-08-29 18:40:54.043891 | orchestrator | 18:40:54.043 STDOUT terraform:  } 2025-08-29 18:40:54.043943 | orchestrator | 18:40:54.043 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 18:40:54.043995 | orchestrator | 18:40:54.043 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 18:40:54.044044 | orchestrator | 18:40:54.044 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.044096 | orchestrator | 18:40:54.044 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.044140 | orchestrator | 18:40:54.044 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 18:40:54.044183 | orchestrator | 18:40:54.044 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.044225 | orchestrator | 
18:40:54.044 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.044269 | orchestrator | 18:40:54.044 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 18:40:54.044311 | orchestrator | 18:40:54.044 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.044354 | orchestrator | 18:40:54.044 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.044396 | orchestrator | 18:40:54.044 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.044438 | orchestrator | 18:40:54.044 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 18:40:54.044480 | orchestrator | 18:40:54.044 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.044520 | orchestrator | 18:40:54.044 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.044570 | orchestrator | 18:40:54.044 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.044613 | orchestrator | 18:40:54.044 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.044671 | orchestrator | 18:40:54.044 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.044715 | orchestrator | 18:40:54.044 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.044741 | orchestrator | 18:40:54.044 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.044777 | orchestrator | 18:40:54.044 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 18:40:54.044798 | orchestrator | 18:40:54.044 STDOUT terraform:  } 2025-08-29 18:40:54.044824 | orchestrator | 18:40:54.044 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.044861 | orchestrator | 18:40:54.044 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 18:40:54.044881 | orchestrator | 18:40:54.044 STDOUT terraform:  } 2025-08-29 18:40:54.044909 | orchestrator | 18:40:54.044 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 
18:40:54.044944 | orchestrator | 18:40:54.044 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.044965 | orchestrator | 18:40:54.044 STDOUT terraform:  } 2025-08-29 18:40:54.045001 | orchestrator | 18:40:54.044 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.045037 | orchestrator | 18:40:54.045 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 18:40:54.045058 | orchestrator | 18:40:54.045 STDOUT terraform:  } 2025-08-29 18:40:54.045087 | orchestrator | 18:40:54.045 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.045108 | orchestrator | 18:40:54.045 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.045138 | orchestrator | 18:40:54.045 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 18:40:54.045173 | orchestrator | 18:40:54.045 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.045193 | orchestrator | 18:40:54.045 STDOUT terraform:  } 2025-08-29 18:40:54.045213 | orchestrator | 18:40:54.045 STDOUT terraform:  } 2025-08-29 18:40:54.045266 | orchestrator | 18:40:54.045 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 18:40:54.045318 | orchestrator | 18:40:54.045 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 18:40:54.045360 | orchestrator | 18:40:54.045 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.045404 | orchestrator | 18:40:54.045 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 18:40:54.045444 | orchestrator | 18:40:54.045 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 18:40:54.045485 | orchestrator | 18:40:54.045 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.045527 | orchestrator | 18:40:54.045 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 18:40:54.045574 | orchestrator | 18:40:54.045 STDOUT terraform:  + device_owner = (known after 
apply) 2025-08-29 18:40:54.045617 | orchestrator | 18:40:54.045 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 18:40:54.045682 | orchestrator | 18:40:54.045 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 18:40:54.045739 | orchestrator | 18:40:54.045 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.045783 | orchestrator | 18:40:54.045 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 18:40:54.045826 | orchestrator | 18:40:54.045 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 18:40:54.045867 | orchestrator | 18:40:54.045 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 18:40:54.045910 | orchestrator | 18:40:54.045 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 18:40:54.045961 | orchestrator | 18:40:54.045 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.046004 | orchestrator | 18:40:54.045 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 18:40:54.046061 | orchestrator | 18:40:54.046 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.046089 | orchestrator | 18:40:54.046 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.046139 | orchestrator | 18:40:54.046 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 18:40:54.046162 | orchestrator | 18:40:54.046 STDOUT terraform:  } 2025-08-29 18:40:54.046188 | orchestrator | 18:40:54.046 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.046229 | orchestrator | 18:40:54.046 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 18:40:54.046251 | orchestrator | 18:40:54.046 STDOUT terraform:  } 2025-08-29 18:40:54.046277 | orchestrator | 18:40:54.046 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.046312 | orchestrator | 18:40:54.046 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 18:40:54.046333 | orchestrator | 18:40:54.046 STDOUT terraform:  } 
2025-08-29 18:40:54.046358 | orchestrator | 18:40:54.046 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 18:40:54.046392 | orchestrator | 18:40:54.046 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 18:40:54.046412 | orchestrator | 18:40:54.046 STDOUT terraform:  } 2025-08-29 18:40:54.046443 | orchestrator | 18:40:54.046 STDOUT terraform:  + binding (known after apply) 2025-08-29 18:40:54.046464 | orchestrator | 18:40:54.046 STDOUT terraform:  + fixed_ip { 2025-08-29 18:40:54.046496 | orchestrator | 18:40:54.046 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 18:40:54.046531 | orchestrator | 18:40:54.046 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.046550 | orchestrator | 18:40:54.046 STDOUT terraform:  } 2025-08-29 18:40:54.046570 | orchestrator | 18:40:54.046 STDOUT terraform:  } 2025-08-29 18:40:54.046622 | orchestrator | 18:40:54.046 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 18:40:54.046691 | orchestrator | 18:40:54.046 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 18:40:54.046718 | orchestrator | 18:40:54.046 STDOUT terraform:  + force_destroy = false 2025-08-29 18:40:54.046759 | orchestrator | 18:40:54.046 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.046818 | orchestrator | 18:40:54.046 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 18:40:54.046870 | orchestrator | 18:40:54.046 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.046925 | orchestrator | 18:40:54.046 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 18:40:54.046968 | orchestrator | 18:40:54.046 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 18:40:54.046989 | orchestrator | 18:40:54.046 STDOUT terraform:  } 2025-08-29 18:40:54.047031 | orchestrator | 18:40:54.046 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-08-29 18:40:54.047074 | orchestrator | 18:40:54.047 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-08-29 18:40:54.047118 | orchestrator | 18:40:54.047 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 18:40:54.047160 | orchestrator | 18:40:54.047 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 18:40:54.047190 | orchestrator | 18:40:54.047 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 18:40:54.047215 | orchestrator | 18:40:54.047 STDOUT terraform:  + "nova", 2025-08-29 18:40:54.047238 | orchestrator | 18:40:54.047 STDOUT terraform:  ] 2025-08-29 18:40:54.047281 | orchestrator | 18:40:54.047 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 18:40:54.047323 | orchestrator | 18:40:54.047 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 18:40:54.047377 | orchestrator | 18:40:54.047 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 18:40:54.047422 | orchestrator | 18:40:54.047 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 18:40:54.047465 | orchestrator | 18:40:54.047 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.047500 | orchestrator | 18:40:54.047 STDOUT terraform:  + name = "testbed" 2025-08-29 18:40:54.047552 | orchestrator | 18:40:54.047 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.047597 | orchestrator | 18:40:54.047 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.047631 | orchestrator | 18:40:54.047 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 18:40:54.047680 | orchestrator | 18:40:54.047 STDOUT terraform:  } 2025-08-29 18:40:54.047755 | orchestrator | 18:40:54.047 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 18:40:54.047817 | orchestrator | 18:40:54.047 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-08-29 18:40:54.047848 | orchestrator | 18:40:54.047 STDOUT terraform:  + description = "ssh" 2025-08-29 18:40:54.047883 | orchestrator | 18:40:54.047 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.047914 | orchestrator | 18:40:54.047 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.047956 | orchestrator | 18:40:54.047 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.047986 | orchestrator | 18:40:54.047 STDOUT terraform:  + port_range_max = 22 2025-08-29 18:40:54.048015 | orchestrator | 18:40:54.047 STDOUT terraform:  + port_range_min = 22 2025-08-29 18:40:54.048054 | orchestrator | 18:40:54.048 STDOUT terraform:  + protocol = "tcp" 2025-08-29 18:40:54.048097 | orchestrator | 18:40:54.048 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.048141 | orchestrator | 18:40:54.048 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 18:40:54.048182 | orchestrator | 18:40:54.048 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 18:40:54.048218 | orchestrator | 18:40:54.048 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 18:40:54.048261 | orchestrator | 18:40:54.048 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 18:40:54.048303 | orchestrator | 18:40:54.048 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.048330 | orchestrator | 18:40:54.048 STDOUT terraform:  } 2025-08-29 18:40:54.048390 | orchestrator | 18:40:54.048 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-08-29 18:40:54.048450 | orchestrator | 18:40:54.048 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-08-29 18:40:54.048485 | orchestrator | 18:40:54.048 STDOUT terraform:  + description = "wireguard" 2025-08-29 18:40:54.048521 | orchestrator 
| 18:40:54.048 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.048553 | orchestrator | 18:40:54.048 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.048596 | orchestrator | 18:40:54.048 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.048626 | orchestrator | 18:40:54.048 STDOUT terraform:  + port_range_max = 51820 2025-08-29 18:40:54.048671 | orchestrator | 18:40:54.048 STDOUT terraform:  + port_range_min = 51820 2025-08-29 18:40:54.048704 | orchestrator | 18:40:54.048 STDOUT terraform:  + protocol = "udp" 2025-08-29 18:40:54.048749 | orchestrator | 18:40:54.048 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.048792 | orchestrator | 18:40:54.048 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 18:40:54.048834 | orchestrator | 18:40:54.048 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 18:40:54.048871 | orchestrator | 18:40:54.048 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 18:40:54.048916 | orchestrator | 18:40:54.048 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 18:40:54.048960 | orchestrator | 18:40:54.048 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.048983 | orchestrator | 18:40:54.048 STDOUT terraform:  } 2025-08-29 18:40:54.049042 | orchestrator | 18:40:54.048 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-08-29 18:40:54.049102 | orchestrator | 18:40:54.049 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-08-29 18:40:54.049138 | orchestrator | 18:40:54.049 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.049169 | orchestrator | 18:40:54.049 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.049216 | orchestrator | 18:40:54.049 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.049248 | orchestrator | 
18:40:54.049 STDOUT terraform:  + protocol = "tcp" 2025-08-29 18:40:54.049291 | orchestrator | 18:40:54.049 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.049336 | orchestrator | 18:40:54.049 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 18:40:54.049378 | orchestrator | 18:40:54.049 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 18:40:54.049421 | orchestrator | 18:40:54.049 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 18:40:54.049463 | orchestrator | 18:40:54.049 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 18:40:54.049505 | orchestrator | 18:40:54.049 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.049526 | orchestrator | 18:40:54.049 STDOUT terraform:  } 2025-08-29 18:40:54.049594 | orchestrator | 18:40:54.049 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-08-29 18:40:54.049668 | orchestrator | 18:40:54.049 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-08-29 18:40:54.049705 | orchestrator | 18:40:54.049 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.049736 | orchestrator | 18:40:54.049 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.049780 | orchestrator | 18:40:54.049 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.049814 | orchestrator | 18:40:54.049 STDOUT terraform:  + protocol = "udp" 2025-08-29 18:40:54.049859 | orchestrator | 18:40:54.049 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.049901 | orchestrator | 18:40:54.049 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 18:40:54.049943 | orchestrator | 18:40:54.049 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 18:40:54.049984 | orchestrator | 18:40:54.049 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-08-29 18:40:54.050043 | orchestrator | 18:40:54.049 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 18:40:54.050089 | orchestrator | 18:40:54.050 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.050111 | orchestrator | 18:40:54.050 STDOUT terraform:  } 2025-08-29 18:40:54.050171 | orchestrator | 18:40:54.050 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-08-29 18:40:54.050233 | orchestrator | 18:40:54.050 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-08-29 18:40:54.050271 | orchestrator | 18:40:54.050 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.050304 | orchestrator | 18:40:54.050 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.050350 | orchestrator | 18:40:54.050 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.050384 | orchestrator | 18:40:54.050 STDOUT terraform:  + protocol = "icmp" 2025-08-29 18:40:54.050434 | orchestrator | 18:40:54.050 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.050477 | orchestrator | 18:40:54.050 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 18:40:54.050521 | orchestrator | 18:40:54.050 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 18:40:54.050559 | orchestrator | 18:40:54.050 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 18:40:54.050620 | orchestrator | 18:40:54.050 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 18:40:54.050681 | orchestrator | 18:40:54.050 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.050704 | orchestrator | 18:40:54.050 STDOUT terraform:  } 2025-08-29 18:40:54.050763 | orchestrator | 18:40:54.050 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-08-29 18:40:54.050823 | 
orchestrator | 18:40:54.050 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-08-29 18:40:54.050862 | orchestrator | 18:40:54.050 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.050897 | orchestrator | 18:40:54.050 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.050942 | orchestrator | 18:40:54.050 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.050975 | orchestrator | 18:40:54.050 STDOUT terraform:  + protocol = "tcp" 2025-08-29 18:40:54.051020 | orchestrator | 18:40:54.050 STDOUT terraform:  + region = (known after apply) 2025-08-29 18:40:54.051063 | orchestrator | 18:40:54.051 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 18:40:54.051107 | orchestrator | 18:40:54.051 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 18:40:54.051145 | orchestrator | 18:40:54.051 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 18:40:54.051189 | orchestrator | 18:40:54.051 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 18:40:54.051233 | orchestrator | 18:40:54.051 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 18:40:54.051257 | orchestrator | 18:40:54.051 STDOUT terraform:  } 2025-08-29 18:40:54.051328 | orchestrator | 18:40:54.051 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-08-29 18:40:54.051387 | orchestrator | 18:40:54.051 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-08-29 18:40:54.051424 | orchestrator | 18:40:54.051 STDOUT terraform:  + direction = "ingress" 2025-08-29 18:40:54.051457 | orchestrator | 18:40:54.051 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 18:40:54.051502 | orchestrator | 18:40:54.051 STDOUT terraform:  + id = (known after apply) 2025-08-29 18:40:54.051535 | orchestrator | 18:40:54.051 STDOUT terraform:  + protocol = "udp" 
2025-08-29 18:40:54.051578 | orchestrator | 18:40:54.051 STDOUT terraform:  + region = (known after apply)
2025-08-29 18:40:54.051622 | orchestrator | 18:40:54.051 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 18:40:54.051698 | orchestrator | 18:40:54.051 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 18:40:54.051742 | orchestrator | 18:40:54.051 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 18:40:54.051785 | orchestrator | 18:40:54.051 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 18:40:54.051828 | orchestrator | 18:40:54.051 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 18:40:54.051849 | orchestrator | 18:40:54.051 STDOUT terraform:  }
2025-08-29 18:40:54.051909 | orchestrator | 18:40:54.051 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-08-29 18:40:54.051967 | orchestrator | 18:40:54.051 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-08-29 18:40:54.052004 | orchestrator | 18:40:54.051 STDOUT terraform:  + direction = "ingress"
2025-08-29 18:40:54.052039 | orchestrator | 18:40:54.052 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 18:40:54.052084 | orchestrator | 18:40:54.052 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.052127 | orchestrator | 18:40:54.052 STDOUT terraform:  + protocol = "icmp"
2025-08-29 18:40:54.052172 | orchestrator | 18:40:54.052 STDOUT terraform:  + region = (known after apply)
2025-08-29 18:40:54.052368 | orchestrator | 18:40:54.052 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 18:40:54.052419 | orchestrator | 18:40:54.052 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 18:40:54.052457 | orchestrator | 18:40:54.052 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 18:40:54.052500 | orchestrator | 18:40:54.052 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 18:40:54.052544 | orchestrator | 18:40:54.052 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 18:40:54.052565 | orchestrator | 18:40:54.052 STDOUT terraform:  }
2025-08-29 18:40:54.052622 | orchestrator | 18:40:54.052 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-08-29 18:40:54.052701 | orchestrator | 18:40:54.052 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-08-29 18:40:54.052734 | orchestrator | 18:40:54.052 STDOUT terraform:  + description = "vrrp"
2025-08-29 18:40:54.052769 | orchestrator | 18:40:54.052 STDOUT terraform:  + direction = "ingress"
2025-08-29 18:40:54.052801 | orchestrator | 18:40:54.052 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 18:40:54.052847 | orchestrator | 18:40:54.052 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.052879 | orchestrator | 18:40:54.052 STDOUT terraform:  + protocol = "112"
2025-08-29 18:40:54.052922 | orchestrator | 18:40:54.052 STDOUT terraform:  + region = (known after apply)
2025-08-29 18:40:54.052967 | orchestrator | 18:40:54.052 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 18:40:54.053011 | orchestrator | 18:40:54.052 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 18:40:54.053051 | orchestrator | 18:40:54.053 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 18:40:54.053102 | orchestrator | 18:40:54.053 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 18:40:54.053145 | orchestrator | 18:40:54.053 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 18:40:54.053166 | orchestrator | 18:40:54.053 STDOUT terraform:  }
2025-08-29 18:40:54.053222 | orchestrator | 18:40:54.053 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-08-29 18:40:54.053279 | orchestrator | 18:40:54.053 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-08-29 18:40:54.053319 | orchestrator | 18:40:54.053 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 18:40:54.053379 | orchestrator | 18:40:54.053 STDOUT terraform:  + description = "management security group"
2025-08-29 18:40:54.053433 | orchestrator | 18:40:54.053 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.053490 | orchestrator | 18:40:54.053 STDOUT terraform:  + name = "testbed-management"
2025-08-29 18:40:54.053536 | orchestrator | 18:40:54.053 STDOUT terraform:  + region = (known after apply)
2025-08-29 18:40:54.053572 | orchestrator | 18:40:54.053 STDOUT terraform:  + stateful = (known after apply)
2025-08-29 18:40:54.053607 | orchestrator | 18:40:54.053 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 18:40:54.053651 | orchestrator | 18:40:54.053 STDOUT terraform:  }
2025-08-29 18:40:54.053760 | orchestrator | 18:40:54.053 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-08-29 18:40:54.053833 | orchestrator | 18:40:54.053 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-08-29 18:40:54.053903 | orchestrator | 18:40:54.053 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 18:40:54.053955 | orchestrator | 18:40:54.053 STDOUT terraform:  + description = "node security group"
2025-08-29 18:40:54.053992 | orchestrator | 18:40:54.053 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.054043 | orchestrator | 18:40:54.054 STDOUT terraform:  + name = "testbed-node"
2025-08-29 18:40:54.054081 | orchestrator | 18:40:54.054 STDOUT terraform:  + region = (known after apply)
2025-08-29 18:40:54.054128 | orchestrator | 18:40:54.054 STDOUT terraform:  + stateful = (known after apply)
2025-08-29 18:40:54.054163 | orchestrator | 18:40:54.054 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 18:40:54.054197 | orchestrator | 18:40:54.054 STDOUT terraform:  }
2025-08-29 18:40:54.054247 | orchestrator | 18:40:54.054 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-08-29 18:40:54.054314 | orchestrator | 18:40:54.054 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-08-29 18:40:54.054369 | orchestrator | 18:40:54.054 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 18:40:54.054406 | orchestrator | 18:40:54.054 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-08-29 18:40:54.054447 | orchestrator | 18:40:54.054 STDOUT terraform:  + dns_nameservers = [
2025-08-29 18:40:54.054471 | orchestrator | 18:40:54.054 STDOUT terraform:  + "8.8.8.8",
2025-08-29 18:40:54.054508 | orchestrator | 18:40:54.054 STDOUT terraform:  + "9.9.9.9",
2025-08-29 18:40:54.054538 | orchestrator | 18:40:54.054 STDOUT terraform:  ]
2025-08-29 18:40:54.054565 | orchestrator | 18:40:54.054 STDOUT terraform:  + enable_dhcp = true
2025-08-29 18:40:54.054616 | orchestrator | 18:40:54.054 STDOUT terraform:  + gateway_ip = (known after apply)
2025-08-29 18:40:54.054680 | orchestrator | 18:40:54.054 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.054708 | orchestrator | 18:40:54.054 STDOUT terraform:  + ip_version = 4
2025-08-29 18:40:54.054758 | orchestrator | 18:40:54.054 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-08-29 18:40:54.054799 | orchestrator | 18:40:54.054 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-08-29 18:40:54.054856 | orchestrator | 18:40:54.054 STDOUT terraform:  + name = "subnet-testbed-management"
2025-08-29 18:40:54.054908 | orchestrator | 18:40:54.054 STDOUT terraform:  + network_id = (known after apply)
2025-08-29 18:40:54.054936 | orchestrator | 18:40:54.054 STDOUT terraform:  + no_gateway = false
2025-08-29 18:40:54.054986 | orchestrator | 18:40:54.054 STDOUT terraform:  + region = (known after apply)
2025-08-29 18:40:54.055022 | orchestrator | 18:40:54.054 STDOUT terraform:  + service_types = (known after apply)
2025-08-29 18:40:54.055075 | orchestrator | 18:40:54.055 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 18:40:54.055104 | orchestrator | 18:40:54.055 STDOUT terraform:  + allocation_pool {
2025-08-29 18:40:54.055148 | orchestrator | 18:40:54.055 STDOUT terraform:  + end = "192.168.31.250"
2025-08-29 18:40:54.055180 | orchestrator | 18:40:54.055 STDOUT terraform:  + start = "192.168.31.200"
2025-08-29 18:40:54.055216 | orchestrator | 18:40:54.055 STDOUT terraform:  }
2025-08-29 18:40:54.055238 | orchestrator | 18:40:54.055 STDOUT terraform:  }
2025-08-29 18:40:54.055268 | orchestrator | 18:40:54.055 STDOUT terraform:  # terraform_data.image will be created
2025-08-29 18:40:54.055313 | orchestrator | 18:40:54.055 STDOUT terraform:  + resource "terraform_data" "image" {
2025-08-29 18:40:54.055343 | orchestrator | 18:40:54.055 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.055383 | orchestrator | 18:40:54.055 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-08-29 18:40:54.055413 | orchestrator | 18:40:54.055 STDOUT terraform:  + output = (known after apply)
2025-08-29 18:40:54.055440 | orchestrator | 18:40:54.055 STDOUT terraform:  }
2025-08-29 18:40:54.055480 | orchestrator | 18:40:54.055 STDOUT terraform:  # terraform_data.image_node will be created
2025-08-29 18:40:54.055524 | orchestrator | 18:40:54.055 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-08-29 18:40:54.055561 | orchestrator | 18:40:54.055 STDOUT terraform:  + id = (known after apply)
2025-08-29 18:40:54.055589 | orchestrator | 18:40:54.055 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-08-29 18:40:54.055633 | orchestrator | 18:40:54.055 STDOUT terraform:  + output = (known after apply)
2025-08-29 18:40:54.055689 | orchestrator | 18:40:54.055 STDOUT terraform:  }
2025-08-29 18:40:54.055737 | orchestrator | 18:40:54.055 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-08-29 18:40:54.055759 | orchestrator | 18:40:54.055 STDOUT terraform: Changes to Outputs:
2025-08-29 18:40:54.055808 | orchestrator | 18:40:54.055 STDOUT terraform:  + manager_address = (sensitive value)
2025-08-29 18:40:54.055841 | orchestrator | 18:40:54.055 STDOUT terraform:  + private_key = (sensitive value)
2025-08-29 18:40:54.100576 | orchestrator | 18:40:54.098 STDOUT terraform: terraform_data.image: Creating...
2025-08-29 18:40:54.100626 | orchestrator | 18:40:54.099 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=c752dd86-0b40-f51b-fa36-b4835fe49a38]
2025-08-29 18:40:54.227719 | orchestrator | 18:40:54.227 STDOUT terraform: terraform_data.image_node: Creating...
2025-08-29 18:40:54.227822 | orchestrator | 18:40:54.227 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=acf6f930-18d3-bfca-869f-79765ea2a9a2]
2025-08-29 18:40:54.236236 | orchestrator | 18:40:54.236 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-08-29 18:40:54.243650 | orchestrator | 18:40:54.243 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-08-29 18:40:54.243836 | orchestrator | 18:40:54.243 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-08-29 18:40:54.255291 | orchestrator | 18:40:54.255 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-08-29 18:40:54.256242 | orchestrator | 18:40:54.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-08-29 18:40:54.265433 | orchestrator | 18:40:54.265 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-08-29 18:40:54.267438 | orchestrator | 18:40:54.267 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-08-29 18:40:54.273496 | orchestrator | 18:40:54.273 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
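For reference, the planned VRRP rule above (ingress, IP protocol number 112, open to 0.0.0.0/0) would correspond to HCL roughly like the following. This is a sketch reconstructed from the plan output only, not the testbed repository's actual source; in particular, which security group the rule is attached to is not visible in the plan, so the `security_group_id` reference is an assumption.

```hcl
# Sketch reconstructed from the plan output; the real source in the
# osism/testbed repository may differ. VRRP is IP protocol 112.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description      = "vrrp"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "112"
  remote_ip_prefix = "0.0.0.0/0"

  # Assumed parent group; the plan does not show which group the rule joins.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```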
2025-08-29 18:40:54.273603 | orchestrator | 18:40:54.273 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-08-29 18:40:54.275894 | orchestrator | 18:40:54.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-08-29 18:40:54.774364 | orchestrator | 18:40:54.774 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 18:40:54.778079 | orchestrator | 18:40:54.777 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-08-29 18:40:54.809877 | orchestrator | 18:40:54.809 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-08-29 18:40:54.814973 | orchestrator | 18:40:54.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-08-29 18:40:55.406420 | orchestrator | 18:40:55.406 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=7d4ee303-ad83-4727-a20f-d496f7a0eb7b]
2025-08-29 18:40:55.407789 | orchestrator | 18:40:55.407 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-08-29 18:40:55.460354 | orchestrator | 18:40:55.460 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 18:40:55.469391 | orchestrator | 18:40:55.469 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-08-29 18:40:57.937575 | orchestrator | 18:40:57.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=0a4c4485-d2ea-4599-9435-e606068873fe]
2025-08-29 18:40:57.952341 | orchestrator | 18:40:57.952 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=a237b6df-fa80-49e2-8f79-019305f27c2d]
2025-08-29 18:40:57.954966 | orchestrator | 18:40:57.954 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-08-29 18:40:57.959486 | orchestrator | 18:40:57.959 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=1605cea1e2d0dd5b154f42ee22859b8c603da703]
2025-08-29 18:40:57.963444 | orchestrator | 18:40:57.963 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c]
2025-08-29 18:40:57.965369 | orchestrator | 18:40:57.965 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-08-29 18:40:57.966501 | orchestrator | 18:40:57.966 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-08-29 18:40:57.972551 | orchestrator | 18:40:57.972 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-08-29 18:40:57.978954 | orchestrator | 18:40:57.978 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=1b2605dd6596aa3b2b3919e152e4b4876eff28ac]
2025-08-29 18:40:57.983457 | orchestrator | 18:40:57.983 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-08-29 18:40:57.993138 | orchestrator | 18:40:57.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=e5bba166-d17c-451d-864c-9f74c60a90a3]
2025-08-29 18:40:57.995905 | orchestrator | 18:40:57.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=4acdbd50-4373-4301-8b9f-e7658d09fe80]
2025-08-29 18:40:57.997964 | orchestrator | 18:40:57.997 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-08-29 18:40:58.001264 | orchestrator | 18:40:58.001 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-08-29 18:40:58.162726 | orchestrator | 18:40:58.162 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=5f9ac8f7-ded0-451e-9523-765e677fc5e6]
2025-08-29 18:40:58.171426 | orchestrator | 18:40:58.171 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-08-29 18:40:58.178953 | orchestrator | 18:40:58.178 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=53b41fa5-6534-4646-b4f8-3662ac98ea03]
2025-08-29 18:40:58.182198 | orchestrator | 18:40:58.181 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=b185a2fd-fb6c-4818-b874-6a265721bd32]
2025-08-29 18:40:58.190535 | orchestrator | 18:40:58.190 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-08-29 18:40:58.195227 | orchestrator | 18:40:58.195 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=21104e56-4cdf-49d9-91fd-13aff314e467]
2025-08-29 18:40:58.832057 | orchestrator | 18:40:58.831 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=2c124e89-2877-4bde-9f84-8fd574e1ad31]
2025-08-29 18:40:58.935918 | orchestrator | 18:40:58.935 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c4d65d1e-0e23-42fb-a8c0-1eb674356600]
2025-08-29 18:40:58.946810 | orchestrator | 18:40:58.946 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-08-29 18:41:01.368821 | orchestrator | 18:41:01.368 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=b3dc65af-678c-4ad0-95f2-4a490e1a0b3a]
2025-08-29 18:41:01.423543 | orchestrator | 18:41:01.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=d5dc2865-d12f-434a-a66f-3507e82ce759]
2025-08-29 18:41:01.438934 | orchestrator | 18:41:01.438 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd]
2025-08-29 18:41:01.529564 | orchestrator | 18:41:01.529 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=593879a8-1213-4abc-9241-c8a1c7d52cf9]
2025-08-29 18:41:01.577781 | orchestrator | 18:41:01.577 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=4c3606b3-c531-44c5-857d-1ee4d13c4585]
2025-08-29 18:41:01.613572 | orchestrator | 18:41:01.613 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=02ff1d4e-2410-4b7f-a7fd-7ee241f95920]
2025-08-29 18:41:02.112654 | orchestrator | 18:41:02.112 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=aff695ca-6ea9-4a83-9070-d800f807f3c7]
2025-08-29 18:41:02.118178 | orchestrator | 18:41:02.117 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-08-29 18:41:02.119573 | orchestrator | 18:41:02.119 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-08-29 18:41:02.120551 | orchestrator | 18:41:02.120 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-08-29 18:41:02.341554 | orchestrator | 18:41:02.341 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a9f65ca4-e4d2-4684-9bda-cb2a6e6979c1]
2025-08-29 18:41:02.356431 | orchestrator | 18:41:02.356 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-08-29 18:41:02.356475 | orchestrator | 18:41:02.356 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-08-29 18:41:02.366545 | orchestrator | 18:41:02.364 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-08-29 18:41:02.366582 | orchestrator | 18:41:02.364 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-08-29 18:41:02.366587 | orchestrator | 18:41:02.365 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-08-29 18:41:02.368786 | orchestrator | 18:41:02.368 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=c138ee71-fce1-4037-8d44-bcc9b5229449]
2025-08-29 18:41:02.370395 | orchestrator | 18:41:02.370 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-08-29 18:41:02.372046 | orchestrator | 18:41:02.371 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-08-29 18:41:02.372202 | orchestrator | 18:41:02.372 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-08-29 18:41:02.372901 | orchestrator | 18:41:02.372 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-08-29 18:41:02.511510 | orchestrator | 18:41:02.511 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=ed1890df-02f7-4973-a102-91be0aa16932]
2025-08-29 18:41:02.515941 | orchestrator | 18:41:02.515 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-08-29 18:41:02.674401 | orchestrator | 18:41:02.674 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=e9b30068-46b4-4b7c-abd4-e7200530bb78]
2025-08-29 18:41:02.689312 | orchestrator | 18:41:02.689 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-08-29 18:41:02.883801 | orchestrator | 18:41:02.883 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=f3920724-0b49-4710-b9d3-7cd7a8bc69f5]
2025-08-29 18:41:02.892738 | orchestrator | 18:41:02.892 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-08-29 18:41:02.938841 | orchestrator | 18:41:02.938 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=01434bd8-1102-4ed2-97f9-ed4ee1374f1f]
2025-08-29 18:41:02.955870 | orchestrator | 18:41:02.955 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-08-29 18:41:03.063972 | orchestrator | 18:41:03.063 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=8f020b1c-96b6-4391-b406-3982d1f6ed0b]
2025-08-29 18:41:03.079694 | orchestrator | 18:41:03.079 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-08-29 18:41:03.115455 | orchestrator | 18:41:03.115 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=c3480f6e-acec-476c-ac9e-f9dc77997566]
2025-08-29 18:41:03.131173 | orchestrator | 18:41:03.130 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-08-29 18:41:03.247960 | orchestrator | 18:41:03.247 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=84ee5e94-8321-44ec-b97d-201d761e5dd3]
2025-08-29 18:41:03.260282 | orchestrator | 18:41:03.260 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-08-29 18:41:03.398123 | orchestrator | 18:41:03.397 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=09fe87a7-e59c-4b58-873c-8abc0efc48b0]
2025-08-29 18:41:03.434044 | orchestrator | 18:41:03.433 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=33fba4ec-b40d-47d6-a83f-cf1ed8ebefdd]
2025-08-29 18:41:03.617077 | orchestrator | 18:41:03.616 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=d4eeffcd-9fa3-452f-8729-5efb6664b073]
2025-08-29 18:41:03.651428 | orchestrator | 18:41:03.651 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=92dcb106-04ea-4dd3-951e-a305cfa5e7ba]
2025-08-29 18:41:03.676436 | orchestrator | 18:41:03.676 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=09c9e607-1981-4f82-8bb0-c1e720cd882f]
2025-08-29 18:41:03.855394 | orchestrator | 18:41:03.855 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=266eecc6-8b0e-46dd-82c9-c7b80c4c0adf]
2025-08-29 18:41:03.899883 | orchestrator | 18:41:03.899 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=865e63cb-148b-4b83-bc32-732df1551802]
2025-08-29 18:41:04.111518 | orchestrator | 18:41:04.111 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=8cac7f8e-4529-4de8-af49-8618a19f1cf9]
2025-08-29 18:41:04.136040 | orchestrator | 18:41:04.135 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=3dbac66b-b9e0-484b-b8c2-01a3068d3285]
2025-08-29 18:41:04.907535 | orchestrator | 18:41:04.907 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=11bc252f-bc70-4e9d-b9f7-759af440a879]
2025-08-29 18:41:04.928800 | orchestrator | 18:41:04.928 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-08-29 18:41:04.938363 | orchestrator | 18:41:04.938 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-08-29 18:41:04.939339 | orchestrator | 18:41:04.939 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-08-29 18:41:04.948201 | orchestrator | 18:41:04.948 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-08-29 18:41:04.950485 | orchestrator | 18:41:04.950 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-08-29 18:41:04.956426 | orchestrator | 18:41:04.956 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-08-29 18:41:04.960267 | orchestrator | 18:41:04.960 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-08-29 18:41:07.021854 | orchestrator | 18:41:07.021 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=b75c37d3-707a-4b9e-9b7d-e3d057ce89da]
2025-08-29 18:41:07.037955 | orchestrator | 18:41:07.037 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-08-29 18:41:07.038070 | orchestrator | 18:41:07.037 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-08-29 18:41:07.039432 | orchestrator | 18:41:07.039 STDOUT terraform: local_file.inventory: Creating...
2025-08-29 18:41:07.045200 | orchestrator | 18:41:07.045 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=312e796e8573c58c891855f2d6a4779be587c041]
2025-08-29 18:41:07.046052 | orchestrator | 18:41:07.045 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=ba91b239cdaecc52c51565c9c8316e94425fa7ff]
2025-08-29 18:41:07.884583 | orchestrator | 18:41:07.884 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=b75c37d3-707a-4b9e-9b7d-e3d057ce89da]
2025-08-29 18:41:14.940088 | orchestrator | 18:41:14.939 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-08-29 18:41:14.941214 | orchestrator | 18:41:14.941 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-08-29 18:41:14.951495 | orchestrator | 18:41:14.951 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-08-29 18:41:14.952660 | orchestrator | 18:41:14.952 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-08-29 18:41:14.958943 | orchestrator | 18:41:14.958 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-08-29 18:41:14.965265 | orchestrator | 18:41:14.965 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-08-29 18:41:24.942788 | orchestrator | 18:41:24.942 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-08-29 18:41:24.942972 | orchestrator | 18:41:24.942 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-08-29 18:41:24.952109 | orchestrator | 18:41:24.952 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-08-29 18:41:24.953514 | orchestrator | 18:41:24.953 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-08-29 18:41:24.959898 | orchestrator | 18:41:24.959 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-08-29 18:41:24.966257 | orchestrator | 18:41:24.966 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-08-29 18:41:25.526751 | orchestrator | 18:41:25.526 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=c0753e05-a760-49f3-b3c2-d07bf4e8b8f4]
2025-08-29 18:41:26.960795 | orchestrator | 18:41:26.960 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 22s [id=4a79f271-d3bd-467f-9ec9-3f26f2f82c95]
2025-08-29 18:41:34.943403 | orchestrator | 18:41:34.943 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-08-29 18:41:34.953648 | orchestrator | 18:41:34.953 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-08-29 18:41:34.954825 | orchestrator | 18:41:34.954 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-08-29 18:41:34.961104 | orchestrator | 18:41:34.960 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-08-29 18:41:35.761727 | orchestrator | 18:41:35.761 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=e921644d-db03-4192-b278-19a1fd96a94a]
2025-08-29 18:41:35.797548 | orchestrator | 18:41:35.797 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=532fdde7-8bcf-4a24-b52b-a4bf9373d6ce]
2025-08-29 18:41:35.841093 | orchestrator | 18:41:35.840 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=526a28fa-0a9c-49ca-917b-572503b6d936]
2025-08-29 18:41:44.943921 | orchestrator | 18:41:44.943 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2025-08-29 18:41:46.393353 | orchestrator | 18:41:46.392 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=86ac85c4-578a-4442-b5a4-dadd2bbd45b1]
2025-08-29 18:41:46.424440 | orchestrator | 18:41:46.424 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-08-29 18:41:46.429747 | orchestrator | 18:41:46.429 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8612333623785237731]
2025-08-29 18:41:46.445376 | orchestrator | 18:41:46.445 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-08-29 18:41:46.449492 | orchestrator | 18:41:46.449 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-08-29 18:41:46.449631 | orchestrator | 18:41:46.449 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-08-29 18:41:46.453226 | orchestrator | 18:41:46.453 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-08-29 18:41:46.458399 | orchestrator | 18:41:46.458 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-08-29 18:41:46.462375 | orchestrator | 18:41:46.462 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-08-29 18:41:46.462590 | orchestrator | 18:41:46.462 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-08-29 18:41:46.470961 | orchestrator | 18:41:46.470 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-08-29 18:41:46.475998 | orchestrator | 18:41:46.475 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-08-29 18:41:46.476933 | orchestrator | 18:41:46.476 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-08-29 18:41:49.891456 | orchestrator | 18:41:49.891 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=526a28fa-0a9c-49ca-917b-572503b6d936/4acdbd50-4373-4301-8b9f-e7658d09fe80] 2025-08-29 18:41:49.917301 | orchestrator | 18:41:49.916 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=86ac85c4-578a-4442-b5a4-dadd2bbd45b1/a237b6df-fa80-49e2-8f79-019305f27c2d] 2025-08-29 18:41:49.957280 | orchestrator | 18:41:49.956 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=526a28fa-0a9c-49ca-917b-572503b6d936/53b41fa5-6534-4646-b4f8-3662ac98ea03] 2025-08-29 18:41:50.092147 | orchestrator | 18:41:50.091 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=86ac85c4-578a-4442-b5a4-dadd2bbd45b1/b185a2fd-fb6c-4818-b874-6a265721bd32] 2025-08-29 18:41:50.097589 | orchestrator | 18:41:50.097 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=532fdde7-8bcf-4a24-b52b-a4bf9373d6ce/e5bba166-d17c-451d-864c-9f74c60a90a3] 2025-08-29 18:41:50.186614 | orchestrator | 
18:41:50.186 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=532fdde7-8bcf-4a24-b52b-a4bf9373d6ce/21104e56-4cdf-49d9-91fd-13aff314e467] 2025-08-29 18:41:51.484655 | orchestrator | 18:41:51.484 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=86ac85c4-578a-4442-b5a4-dadd2bbd45b1/5f9ac8f7-ded0-451e-9523-765e677fc5e6] 2025-08-29 18:41:56.184264 | orchestrator | 18:41:56.183 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=526a28fa-0a9c-49ca-917b-572503b6d936/2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c] 2025-08-29 18:41:56.309667 | orchestrator | 18:41:56.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=532fdde7-8bcf-4a24-b52b-a4bf9373d6ce/0a4c4485-d2ea-4599-9435-e606068873fe] 2025-08-29 18:41:56.484693 | orchestrator | 18:41:56.484 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-08-29 18:42:06.484877 | orchestrator | 18:42:06.484 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-08-29 18:42:07.016018 | orchestrator | 18:42:07.015 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=ee2e9230-1bdd-477e-85d8-eadffc75e983] 2025-08-29 18:42:07.043894 | orchestrator | 18:42:07.043 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
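Note the IDs reported for the `openstack_compute_volume_attach_v2` resources above: each is a composite of the form `<server_id>/<volume_id>`, e.g. `526a28fa-.../4acdbd50-...`. A minimal sketch of splitting such an ID back into its parts (helper name is hypothetical):

```python
def parse_attach_id(attach_id: str) -> tuple[str, str]:
    # Volume-attach IDs in the log above are "<server_id>/<volume_id>"
    server_id, volume_id = attach_id.split("/", 1)
    return server_id, volume_id
```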
2025-08-29 18:42:07.044007 | orchestrator | 18:42:07.043 STDOUT terraform: Outputs: 2025-08-29 18:42:07.044052 | orchestrator | 18:42:07.043 STDOUT terraform: manager_address = 2025-08-29 18:42:07.044098 | orchestrator | 18:42:07.043 STDOUT terraform: private_key = 2025-08-29 18:42:07.121165 | orchestrator | ok: Runtime: 0:01:19.135761 2025-08-29 18:42:07.144301 | 2025-08-29 18:42:07.144423 | TASK [Create infrastructure (stable)] 2025-08-29 18:42:07.677658 | orchestrator | skipping: Conditional result was False 2025-08-29 18:42:07.696834 | 2025-08-29 18:42:07.696994 | TASK [Fetch manager address] 2025-08-29 18:42:08.131752 | orchestrator | ok 2025-08-29 18:42:08.142557 | 2025-08-29 18:42:08.142713 | TASK [Set manager_host address] 2025-08-29 18:42:08.226999 | orchestrator | ok 2025-08-29 18:42:08.239601 | 2025-08-29 18:42:08.239778 | LOOP [Update ansible collections] 2025-08-29 18:42:09.157851 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 18:42:09.158216 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 18:42:09.158273 | orchestrator | Starting galaxy collection install process 2025-08-29 18:42:09.158309 | orchestrator | Process install dependency map 2025-08-29 18:42:09.158340 | orchestrator | Starting collection install process 2025-08-29 18:42:09.158368 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-08-29 18:42:09.158399 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-08-29 18:42:09.158435 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 18:42:09.158498 | orchestrator | ok: Item: commons Runtime: 0:00:00.608321 2025-08-29 18:42:10.028902 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 
2025-08-29 18:42:10.029118 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 18:42:10.029204 | orchestrator | Starting galaxy collection install process 2025-08-29 18:42:10.029267 | orchestrator | Process install dependency map 2025-08-29 18:42:10.029325 | orchestrator | Starting collection install process 2025-08-29 18:42:10.029379 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-08-29 18:42:10.029435 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-08-29 18:42:10.029487 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 18:42:10.029562 | orchestrator | ok: Item: services Runtime: 0:00:00.618151 2025-08-29 18:42:10.048819 | 2025-08-29 18:42:10.049034 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 18:42:22.668557 | orchestrator | ok 2025-08-29 18:42:22.680113 | 2025-08-29 18:42:22.680234 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 18:43:22.736077 | orchestrator | ok 2025-08-29 18:43:22.746154 | 2025-08-29 18:43:22.746290 | TASK [Fetch manager ssh hostkey] 2025-08-29 18:43:24.326245 | orchestrator | Output suppressed because no_log was given 2025-08-29 18:43:24.336159 | 2025-08-29 18:43:24.336342 | TASK [Get ssh keypair from terraform environment] 2025-08-29 18:43:24.881029 | orchestrator | ok: Runtime: 0:00:00.005258 2025-08-29 18:43:24.896876 | 2025-08-29 18:43:24.897041 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 18:43:24.944414 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
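The wait task above uses Ansible's `wait_for` module with a `search_regex` of "OpenSSH": the module connects to port 22 and matches the pattern against the identification banner the SSH server sends before authentication. A minimal sketch of that banner check, assuming the banner has already been read from the socket (function name is hypothetical):

```python
import re

def banner_matches(banner: bytes, pattern: str = "OpenSSH") -> bool:
    # An SSH server sends an identification string first,
    # e.g. b"SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13\r\n"
    return re.search(pattern.encode(), banner) is not None
```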
2025-08-29 18:43:24.953709 | 2025-08-29 18:43:24.953833 | TASK [Run manager part 0] 2025-08-29 18:43:25.823522 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 18:43:25.868872 | orchestrator | 2025-08-29 18:43:25.868957 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 18:43:25.868972 | orchestrator | 2025-08-29 18:43:25.868997 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 18:43:27.602968 | orchestrator | ok: [testbed-manager] 2025-08-29 18:43:27.603638 | orchestrator | 2025-08-29 18:43:27.603682 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 18:43:27.603694 | orchestrator | 2025-08-29 18:43:27.603705 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 18:43:29.436168 | orchestrator | ok: [testbed-manager] 2025-08-29 18:43:29.436213 | orchestrator | 2025-08-29 18:43:29.436220 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 18:43:30.083085 | orchestrator | ok: [testbed-manager] 2025-08-29 18:43:30.083158 | orchestrator | 2025-08-29 18:43:30.083176 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 18:43:30.123439 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.123526 | orchestrator | 2025-08-29 18:43:30.123541 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 18:43:30.160697 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.160768 | orchestrator | 2025-08-29 18:43:30.160787 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 18:43:30.201252 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.201302 | 
orchestrator | 2025-08-29 18:43:30.201310 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 18:43:30.234144 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.234187 | orchestrator | 2025-08-29 18:43:30.234193 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 18:43:30.260674 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.260727 | orchestrator | 2025-08-29 18:43:30.260738 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 18:43:30.294412 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.294481 | orchestrator | 2025-08-29 18:43:30.294491 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 18:43:30.326762 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:43:30.326816 | orchestrator | 2025-08-29 18:43:30.326826 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 18:43:31.050917 | orchestrator | changed: [testbed-manager] 2025-08-29 18:43:31.050962 | orchestrator | 2025-08-29 18:43:31.050969 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-08-29 18:46:03.142483 | orchestrator | changed: [testbed-manager] 2025-08-29 18:46:03.142554 | orchestrator | 2025-08-29 18:46:03.142571 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 18:47:20.059019 | orchestrator | changed: [testbed-manager] 2025-08-29 18:47:20.059088 | orchestrator | 2025-08-29 18:47:20.059103 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 18:47:40.339272 | orchestrator | changed: [testbed-manager] 2025-08-29 18:47:40.339369 | orchestrator | 2025-08-29 18:47:40.339388 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-08-29 18:47:49.200652 | orchestrator | changed: [testbed-manager] 2025-08-29 18:47:49.200694 | orchestrator | 2025-08-29 18:47:49.200702 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 18:47:49.247756 | orchestrator | ok: [testbed-manager] 2025-08-29 18:47:49.247834 | orchestrator | 2025-08-29 18:47:49.247849 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 18:47:50.034357 | orchestrator | ok: [testbed-manager] 2025-08-29 18:47:50.034448 | orchestrator | 2025-08-29 18:47:50.034465 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 18:47:50.744658 | orchestrator | changed: [testbed-manager] 2025-08-29 18:47:50.744762 | orchestrator | 2025-08-29 18:47:50.744780 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 18:47:57.175955 | orchestrator | changed: [testbed-manager] 2025-08-29 18:47:57.176043 | orchestrator | 2025-08-29 18:47:57.176082 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 18:48:03.722582 | orchestrator | changed: [testbed-manager] 2025-08-29 18:48:03.722669 | orchestrator | 2025-08-29 18:48:03.722711 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 18:48:06.671237 | orchestrator | changed: [testbed-manager] 2025-08-29 18:48:06.671323 | orchestrator | 2025-08-29 18:48:06.671340 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 18:48:08.859600 | orchestrator | changed: [testbed-manager] 2025-08-29 18:48:08.859693 | orchestrator | 2025-08-29 18:48:08.859710 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 
18:48:09.987225 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 18:48:09.987351 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 18:48:09.987368 | orchestrator | 2025-08-29 18:48:09.987380 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 18:48:10.027969 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 18:48:10.028010 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 18:48:10.028016 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 18:48:10.028021 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-08-29 18:48:13.113225 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 18:48:13.113355 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 18:48:13.113371 | orchestrator | 2025-08-29 18:48:13.113384 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 18:48:13.679719 | orchestrator | changed: [testbed-manager] 2025-08-29 18:48:13.679796 | orchestrator | 2025-08-29 18:48:13.679811 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 18:49:33.119036 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 18:49:33.119195 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 18:49:33.119218 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 18:49:33.119231 | orchestrator | 2025-08-29 18:49:33.119245 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 18:49:35.457997 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-08-29 18:49:35.458102 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 18:49:35.458116 | orchestrator | 2025-08-29 18:49:35.458128 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 18:49:35.458140 | orchestrator | 2025-08-29 18:49:35.458151 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 18:49:36.921297 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:36.921379 | orchestrator | 2025-08-29 18:49:36.921397 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 18:49:36.968916 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:36.968966 | orchestrator | 2025-08-29 18:49:36.968975 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 18:49:37.028800 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:37.028884 | orchestrator | 2025-08-29 18:49:37.028899 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 18:49:37.850593 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:37.850673 | orchestrator | 2025-08-29 18:49:37.850689 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 18:49:38.563117 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:38.563204 | orchestrator | 2025-08-29 18:49:38.563221 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 18:49:39.955852 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 18:49:39.955931 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 18:49:39.955945 | orchestrator | 2025-08-29 18:49:39.955967 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-08-29 18:49:41.307145 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:41.307247 | orchestrator | 2025-08-29 18:49:41.307264 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 18:49:43.122892 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 18:49:43.122980 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 18:49:43.122995 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 18:49:43.123008 | orchestrator | 2025-08-29 18:49:43.123021 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 18:49:43.177049 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:43.177109 | orchestrator | 2025-08-29 18:49:43.177122 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 18:49:43.741808 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:43.741889 | orchestrator | 2025-08-29 18:49:43.741907 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 18:49:43.810465 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:43.810531 | orchestrator | 2025-08-29 18:49:43.810545 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 18:49:44.638484 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 18:49:44.639263 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:44.639305 | orchestrator | 2025-08-29 18:49:44.639329 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 18:49:44.678066 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:44.678161 | orchestrator | 2025-08-29 18:49:44.678183 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 18:49:44.715619 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:44.715692 | orchestrator | 2025-08-29 18:49:44.715709 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 18:49:44.749131 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:44.749206 | orchestrator | 2025-08-29 18:49:44.749220 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 18:49:44.791645 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:44.791728 | orchestrator | 2025-08-29 18:49:44.791747 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 18:49:45.495872 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:45.495916 | orchestrator | 2025-08-29 18:49:45.495921 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 18:49:45.495926 | orchestrator | 2025-08-29 18:49:45.495931 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 18:49:46.934925 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:46.934974 | orchestrator | 2025-08-29 18:49:46.934980 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 18:49:47.898527 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:47.898567 | orchestrator | 2025-08-29 18:49:47.898573 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:49:47.898578 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 18:49:47.898583 | orchestrator | 2025-08-29 18:49:48.214749 | orchestrator | ok: Runtime: 0:06:22.752637 2025-08-29 18:49:48.232175 | 2025-08-29 18:49:48.232331 | TASK [Point 
out that the log in on the manager is now possible] 2025-08-29 18:49:48.281259 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-08-29 18:49:48.291293 | 2025-08-29 18:49:48.291417 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 18:49:48.336046 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 18:49:48.344469 | 2025-08-29 18:49:48.344628 | TASK [Run manager part 1 + 2] 2025-08-29 18:49:49.162490 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 18:49:49.214939 | orchestrator | 2025-08-29 18:49:49.215020 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 18:49:49.215038 | orchestrator | 2025-08-29 18:49:49.215090 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 18:49:52.196177 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:52.196384 | orchestrator | 2025-08-29 18:49:52.196467 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 18:49:52.232886 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:52.232956 | orchestrator | 2025-08-29 18:49:52.232974 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 18:49:52.273857 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:52.273924 | orchestrator | 2025-08-29 18:49:52.273940 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 18:49:52.313537 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:52.313600 | orchestrator | 2025-08-29 18:49:52.313618 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-08-29 18:49:52.377074 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:52.377154 | orchestrator | 2025-08-29 18:49:52.377172 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 18:49:52.440899 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:52.440977 | orchestrator | 2025-08-29 18:49:52.440994 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 18:49:52.481114 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 18:49:52.481181 | orchestrator | 2025-08-29 18:49:52.481195 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 18:49:53.196216 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:53.196301 | orchestrator | 2025-08-29 18:49:53.196319 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 18:49:53.237561 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:49:53.237634 | orchestrator | 2025-08-29 18:49:53.237649 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 18:49:54.646155 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:54.646250 | orchestrator | 2025-08-29 18:49:54.646271 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 18:49:55.229320 | orchestrator | ok: [testbed-manager] 2025-08-29 18:49:55.229424 | orchestrator | 2025-08-29 18:49:55.229444 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 18:49:56.408363 | orchestrator | changed: [testbed-manager] 2025-08-29 18:49:56.408455 | orchestrator | 2025-08-29 18:49:56.408475 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-08-29 18:50:12.313758 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:12.313930 | orchestrator | 2025-08-29 18:50:12.313948 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 18:50:12.925399 | orchestrator | ok: [testbed-manager] 2025-08-29 18:50:12.925483 | orchestrator | 2025-08-29 18:50:12.925502 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 18:50:12.976877 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:50:12.976923 | orchestrator | 2025-08-29 18:50:12.976935 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 18:50:13.866479 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:13.866535 | orchestrator | 2025-08-29 18:50:13.866544 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 18:50:14.723244 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:14.723322 | orchestrator | 2025-08-29 18:50:14.723338 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 18:50:15.316688 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:15.316750 | orchestrator | 2025-08-29 18:50:15.316765 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 18:50:15.354755 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 18:50:15.354857 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 18:50:15.354872 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 18:50:15.354884 | orchestrator | deprecation_warnings=False in ansible.cfg. 
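The deprecation warning repeated above states its own remedy: setting `deprecation_warnings=False` in ansible.cfg. The corresponding configuration fragment would be:

```ini
[defaults]
deprecation_warnings = False
```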
2025-08-29 18:50:17.280928 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:17.280993 | orchestrator | 2025-08-29 18:50:17.281008 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 18:50:26.286082 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 18:50:26.286129 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 18:50:26.286139 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 18:50:26.286146 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 18:50:26.286157 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 18:50:26.286164 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 18:50:26.286170 | orchestrator | 2025-08-29 18:50:26.286178 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 18:50:27.332786 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:27.332866 | orchestrator | 2025-08-29 18:50:27.332883 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 18:50:27.373990 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:50:27.374108 | orchestrator | 2025-08-29 18:50:27.374124 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 18:50:30.480267 | orchestrator | changed: [testbed-manager] 2025-08-29 18:50:30.480372 | orchestrator | 2025-08-29 18:50:30.480389 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 18:50:30.516287 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:50:30.516367 | orchestrator | 2025-08-29 18:50:30.516381 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 18:52:10.830063 | orchestrator | changed: [testbed-manager] 2025-08-29 
18:52:10.830157 | orchestrator | 2025-08-29 18:52:10.830176 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 18:52:12.074301 | orchestrator | ok: [testbed-manager] 2025-08-29 18:52:12.074338 | orchestrator | 2025-08-29 18:52:12.074346 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:52:12.074353 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 18:52:12.074359 | orchestrator | 2025-08-29 18:52:12.475005 | orchestrator | ok: Runtime: 0:02:23.546883 2025-08-29 18:52:12.494991 | 2025-08-29 18:52:12.495990 | TASK [Reboot manager] 2025-08-29 18:52:14.037779 | orchestrator | ok: Runtime: 0:00:00.976613 2025-08-29 18:52:14.053304 | 2025-08-29 18:52:14.053472 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 18:52:30.643332 | orchestrator | ok 2025-08-29 18:52:30.654342 | 2025-08-29 18:52:30.654471 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 18:53:30.705394 | orchestrator | ok 2025-08-29 18:53:30.716210 | 2025-08-29 18:53:30.716366 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 18:53:33.261681 | orchestrator | 2025-08-29 18:53:33.261865 | orchestrator | # DEPLOY MANAGER 2025-08-29 18:53:33.261889 | orchestrator | 2025-08-29 18:53:33.261903 | orchestrator | + set -e 2025-08-29 18:53:33.261917 | orchestrator | + echo 2025-08-29 18:53:33.261930 | orchestrator | + echo '# DEPLOY MANAGER' 2025-08-29 18:53:33.261947 | orchestrator | + echo 2025-08-29 18:53:33.261999 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 18:53:33.264429 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 18:53:33.264468 | orchestrator | 2025-08-29 18:53:33.264483 | orchestrator | export CEPH_VERSION=reef 2025-08-29 18:53:33.264497 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 18:53:33.264510 | orchestrator 
| export MANAGER_VERSION=latest 2025-08-29 18:53:33.264533 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 18:53:33.264544 | orchestrator | 2025-08-29 18:53:33.264562 | orchestrator | export ARA=false 2025-08-29 18:53:33.264574 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 18:53:33.264592 | orchestrator | export TEMPEST=false 2025-08-29 18:53:33.264604 | orchestrator | export IS_ZUUL=true 2025-08-29 18:53:33.264615 | orchestrator | 2025-08-29 18:53:33.264634 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2025-08-29 18:53:33.264646 | orchestrator | export EXTERNAL_API=false 2025-08-29 18:53:33.264657 | orchestrator | 2025-08-29 18:53:33.264667 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 18:53:33.264681 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 18:53:33.264691 | orchestrator | 2025-08-29 18:53:33.264702 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 18:53:33.264720 | orchestrator | 2025-08-29 18:53:33.264732 | orchestrator | + echo 2025-08-29 18:53:33.264744 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 18:53:33.265414 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 18:53:33.265433 | orchestrator | ++ INTERACTIVE=false 2025-08-29 18:53:33.265446 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 18:53:33.265463 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 18:53:33.265636 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 18:53:33.265781 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 18:53:33.265797 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 18:53:33.265809 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 18:53:33.265820 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 18:53:33.265831 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 18:53:33.265843 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 18:53:33.265853 | orchestrator | ++ export MANAGER_VERSION=latest 2025-08-29 18:53:33.265864 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 18:53:33.265875 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 18:53:33.265904 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 18:53:33.265916 | orchestrator | ++ export ARA=false
2025-08-29 18:53:33.265927 | orchestrator | ++ ARA=false
2025-08-29 18:53:33.265938 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 18:53:33.265949 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 18:53:33.265959 | orchestrator | ++ export TEMPEST=false
2025-08-29 18:53:33.265970 | orchestrator | ++ TEMPEST=false
2025-08-29 18:53:33.265980 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 18:53:33.265991 | orchestrator | ++ IS_ZUUL=true
2025-08-29 18:53:33.266002 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2025-08-29 18:53:33.266012 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2025-08-29 18:53:33.266183 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 18:53:33.266196 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 18:53:33.266206 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 18:53:33.266217 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 18:53:33.266228 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 18:53:33.266238 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 18:53:33.266250 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 18:53:33.266261 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 18:53:33.266272 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-08-29 18:53:33.334167 | orchestrator | + docker version
2025-08-29 18:53:33.633040 | orchestrator | Client: Docker Engine - Community
2025-08-29 18:53:33.633160 | orchestrator | Version: 27.5.1
2025-08-29 18:53:33.633178 | orchestrator | API version: 1.47
2025-08-29 18:53:33.633190 | orchestrator | Go version: go1.22.11
2025-08-29 18:53:33.633202 | orchestrator | Git commit: 9f9e405
2025-08-29 18:53:33.633213 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 18:53:33.633225 | orchestrator | OS/Arch: linux/amd64
2025-08-29 18:53:33.633236 | orchestrator | Context: default
2025-08-29 18:53:33.633247 | orchestrator |
2025-08-29 18:53:33.633259 | orchestrator | Server: Docker Engine - Community
2025-08-29 18:53:33.633270 | orchestrator | Engine:
2025-08-29 18:53:33.633282 | orchestrator | Version: 27.5.1
2025-08-29 18:53:33.633292 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-08-29 18:53:33.633359 | orchestrator | Go version: go1.22.11
2025-08-29 18:53:33.633372 | orchestrator | Git commit: 4c9b3b0
2025-08-29 18:53:33.633383 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 18:53:33.633394 | orchestrator | OS/Arch: linux/amd64
2025-08-29 18:53:33.633404 | orchestrator | Experimental: false
2025-08-29 18:53:33.633415 | orchestrator | containerd:
2025-08-29 18:53:33.633426 | orchestrator | Version: 1.7.27
2025-08-29 18:53:33.633437 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-08-29 18:53:33.633449 | orchestrator | runc:
2025-08-29 18:53:33.633460 | orchestrator | Version: 1.2.5
2025-08-29 18:53:33.633471 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-08-29 18:53:33.633482 | orchestrator | docker-init:
2025-08-29 18:53:33.633492 | orchestrator | Version: 0.19.0
2025-08-29 18:53:33.633504 | orchestrator | GitCommit: de40ad0
2025-08-29 18:53:33.637301 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-08-29 18:53:33.649969 | orchestrator | + set -e
2025-08-29 18:53:33.650002 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 18:53:33.650010 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 18:53:33.650047 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 18:53:33.650056 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 18:53:33.650065 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 18:53:33.650074 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 18:53:33.650084 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 18:53:33.650094 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 18:53:33.650104 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 18:53:33.650128 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 18:53:33.650136 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 18:53:33.650145 | orchestrator | ++ export ARA=false
2025-08-29 18:53:33.650154 | orchestrator | ++ ARA=false
2025-08-29 18:53:33.650162 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 18:53:33.650172 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 18:53:33.650180 | orchestrator | ++ export TEMPEST=false
2025-08-29 18:53:33.650188 | orchestrator | ++ TEMPEST=false
2025-08-29 18:53:33.650198 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 18:53:33.650204 | orchestrator | ++ IS_ZUUL=true
2025-08-29 18:53:33.650209 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2025-08-29 18:53:33.650214 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11
2025-08-29 18:53:33.650220 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 18:53:33.650225 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 18:53:33.650230 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 18:53:33.650235 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 18:53:33.650240 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 18:53:33.650245 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 18:53:33.650250 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 18:53:33.650255 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 18:53:33.650261 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 18:53:33.650271 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 18:53:33.650276 | orchestrator | ++ INTERACTIVE=false
2025-08-29 18:53:33.650281 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 18:53:33.650294 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 18:53:33.650302 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 18:53:33.650307 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 18:53:33.650366 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-08-29 18:53:33.659103 | orchestrator | + set -e
2025-08-29 18:53:33.659214 | orchestrator | + VERSION=reef
2025-08-29 18:53:33.659779 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 18:53:33.665989 | orchestrator | + [[ -n ceph_version: reef ]]
2025-08-29 18:53:33.666075 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 18:53:33.672355 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-08-29 18:53:33.678670 | orchestrator | + set -e
2025-08-29 18:53:33.678713 | orchestrator | + VERSION=2024.2
2025-08-29 18:53:33.678731 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 18:53:33.681007 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-08-29 18:53:33.681055 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 18:53:33.684656 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-08-29 18:53:33.685366 | orchestrator | ++ semver latest 7.0.0
2025-08-29 18:53:33.750923 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 18:53:33.751008 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 18:53:33.751028 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-08-29 18:53:33.751040 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-08-29 18:53:33.844205 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 18:53:33.845943 | orchestrator | + source /opt/venv/bin/activate
2025-08-29 18:53:33.847169 | orchestrator | ++ deactivate nondestructive
2025-08-29 18:53:33.847210 | orchestrator | ++ '[' -n '' ']'
2025-08-29 18:53:33.847228 | orchestrator | ++ '[' -n '' ']'
2025-08-29 18:53:33.847244 | orchestrator | ++ hash -r
2025-08-29 18:53:33.847294 | orchestrator | ++ '[' -n '' ']'
2025-08-29 18:53:33.847308 | orchestrator | ++ unset VIRTUAL_ENV
2025-08-29 18:53:33.847319 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-08-29 18:53:33.847340 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-08-29 18:53:33.847612 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-08-29 18:53:33.847636 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-08-29 18:53:33.847652 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-08-29 18:53:33.847663 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-08-29 18:53:33.847881 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 18:53:33.847899 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 18:53:33.847918 | orchestrator | ++ export PATH
2025-08-29 18:53:33.847933 | orchestrator | ++ '[' -n '' ']'
2025-08-29 18:53:33.847944 | orchestrator | ++ '[' -z '' ']'
2025-08-29 18:53:33.847965 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-08-29 18:53:33.847976 | orchestrator | ++ PS1='(venv) '
2025-08-29 18:53:33.847987 | orchestrator | ++ export PS1
2025-08-29 18:53:33.848005 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-08-29 18:53:33.848019 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-08-29 18:53:33.848034 | orchestrator | ++ hash -r
2025-08-29 18:53:33.848326 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-08-29 18:53:35.257025 | orchestrator |
2025-08-29 18:53:35.257157 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-08-29 18:53:35.257174 | orchestrator |
2025-08-29 18:53:35.257186 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 18:53:35.832583 | orchestrator | ok: [testbed-manager]
2025-08-29 18:53:35.832678 | orchestrator |
2025-08-29 18:53:35.832693 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 18:53:36.870402 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:36.870515 | orchestrator |
2025-08-29 18:53:36.870531 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-08-29 18:53:36.870544 | orchestrator |
2025-08-29 18:53:36.870556 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 18:53:39.342385 | orchestrator | ok: [testbed-manager]
2025-08-29 18:53:39.342494 | orchestrator |
2025-08-29 18:53:39.342511 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-08-29 18:53:39.402383 | orchestrator | ok: [testbed-manager]
2025-08-29 18:53:39.402467 | orchestrator |
2025-08-29 18:53:39.402483 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-08-29 18:53:39.912903 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:39.913002 | orchestrator |
2025-08-29 18:53:39.913021 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-08-29 18:53:39.960764 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:53:39.960834 | orchestrator |
2025-08-29 18:53:39.960849 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-08-29 18:53:40.336183 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:40.336284 | orchestrator |
2025-08-29 18:53:40.336300 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-08-29 18:53:40.386760 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:53:40.386851 | orchestrator |
2025-08-29 18:53:40.386868 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-08-29 18:53:40.721851 | orchestrator | ok: [testbed-manager]
2025-08-29 18:53:40.721944 | orchestrator |
2025-08-29 18:53:40.721960 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-08-29 18:53:40.842667 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:53:40.842738 | orchestrator |
2025-08-29 18:53:40.842752 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-08-29 18:53:40.842763 | orchestrator |
2025-08-29 18:53:40.842777 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 18:53:42.701476 | orchestrator | ok: [testbed-manager]
2025-08-29 18:53:42.701578 | orchestrator |
2025-08-29 18:53:42.701595 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-08-29 18:53:42.829885 | orchestrator | included: osism.services.traefik for testbed-manager
2025-08-29 18:53:42.829968 | orchestrator |
2025-08-29 18:53:42.829982 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-08-29 18:53:42.890387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-08-29 18:53:42.890454 | orchestrator |
2025-08-29 18:53:42.890468 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-08-29 18:53:44.067070 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-08-29 18:53:44.067196 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-08-29 18:53:44.067212 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-08-29 18:53:44.067224 | orchestrator |
2025-08-29 18:53:44.067237 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-08-29 18:53:46.029631 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-08-29 18:53:46.029733 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-08-29 18:53:46.029750 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-08-29 18:53:46.029762 | orchestrator |
2025-08-29 18:53:46.029775 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-08-29 18:53:46.727855 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 18:53:46.727950 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:46.727966 | orchestrator |
2025-08-29 18:53:46.727978 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-08-29 18:53:47.409576 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 18:53:47.409667 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:47.409682 | orchestrator |
2025-08-29 18:53:47.409694 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-08-29 18:53:47.464427 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:53:47.464474 | orchestrator |
2025-08-29 18:53:47.464487 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-08-29 18:53:47.863689 | orchestrator | ok: [testbed-manager]
2025-08-29 18:53:47.863777 | orchestrator |
2025-08-29 18:53:47.863791 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-08-29 18:53:47.941279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-08-29 18:53:47.941351 | orchestrator |
2025-08-29 18:53:47.941364 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-08-29 18:53:49.075902 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:49.076008 | orchestrator |
2025-08-29 18:53:49.076025 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-08-29 18:53:49.966432 | orchestrator | changed: [testbed-manager]
2025-08-29 18:53:49.966528 | orchestrator |
2025-08-29 18:53:49.966543 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-08-29 18:54:01.953233 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:01.953339 | orchestrator |
2025-08-29 18:54:01.953357 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-08-29 18:54:02.013970 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:54:02.014177 | orchestrator |
2025-08-29 18:54:02.014198 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-08-29 18:54:02.014212 | orchestrator |
2025-08-29 18:54:02.014223 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 18:54:03.960487 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:03.960586 | orchestrator |
2025-08-29 18:54:03.960632 | orchestrator | TASK [Apply manager role] ******************************************************
2025-08-29 18:54:04.093547 | orchestrator | included: osism.services.manager for testbed-manager
2025-08-29 18:54:04.093642 | orchestrator |
2025-08-29 18:54:04.093657 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-08-29 18:54:04.174411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 18:54:04.174504 | orchestrator |
2025-08-29 18:54:04.174520 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-08-29 18:54:06.977075 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:06.977203 | orchestrator |
2025-08-29 18:54:06.977219 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-08-29 18:54:07.029950 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:07.030070 | orchestrator |
2025-08-29 18:54:07.030094 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-08-29 18:54:07.165576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-08-29 18:54:07.165660 | orchestrator |
2025-08-29 18:54:07.165676 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-08-29 18:54:10.167407 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-08-29 18:54:10.167509 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-08-29 18:54:10.167523 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-08-29 18:54:10.167535 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-08-29 18:54:10.167546 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-08-29 18:54:10.167557 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-08-29 18:54:10.167568 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-08-29 18:54:10.167580 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-08-29 18:54:10.167591 | orchestrator |
2025-08-29 18:54:10.167603 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-08-29 18:54:10.822554 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:10.822645 | orchestrator |
2025-08-29 18:54:10.822660 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-08-29 18:54:11.490841 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:11.490936 | orchestrator |
2025-08-29 18:54:11.490952 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-08-29 18:54:11.575012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-08-29 18:54:11.575084 | orchestrator |
2025-08-29 18:54:11.575124 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-08-29 18:54:12.867582 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-08-29 18:54:12.867689 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-08-29 18:54:12.867706 | orchestrator |
2025-08-29 18:54:12.867720 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-08-29 18:54:13.534832 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:13.534942 | orchestrator |
2025-08-29 18:54:13.534959 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-08-29 18:54:13.592080 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:54:13.592174 | orchestrator |
2025-08-29 18:54:13.592189 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-08-29 18:54:13.646897 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:54:13.646957 | orchestrator |
2025-08-29 18:54:13.646972 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-08-29 18:54:13.710554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-08-29 18:54:13.710605 | orchestrator |
2025-08-29 18:54:13.710620 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-08-29 18:54:15.147013 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 18:54:15.147150 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 18:54:15.147196 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:15.147210 | orchestrator |
2025-08-29 18:54:15.147222 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-08-29 18:54:15.833313 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:15.833407 | orchestrator |
2025-08-29 18:54:15.833422 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-08-29 18:54:15.891902 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:54:15.891947 | orchestrator |
2025-08-29 18:54:15.891960 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-08-29 18:54:15.995377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-08-29 18:54:15.995432 | orchestrator |
2025-08-29 18:54:15.995445 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-08-29 18:54:16.597866 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:16.597970 | orchestrator |
2025-08-29 18:54:16.597985 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-08-29 18:54:17.020926 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:17.021023 | orchestrator |
2025-08-29 18:54:17.021039 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-08-29 18:54:18.307908 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-08-29 18:54:18.308010 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-08-29 18:54:18.308026 | orchestrator |
2025-08-29 18:54:18.308040 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-08-29 18:54:18.945568 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:18.945659 | orchestrator |
2025-08-29 18:54:18.945674 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-08-29 18:54:19.344128 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:19.344216 | orchestrator |
2025-08-29 18:54:19.344230 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-08-29 18:54:19.724471 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:19.724559 | orchestrator |
2025-08-29 18:54:19.724573 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-08-29 18:54:19.775715 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:54:19.775743 | orchestrator |
2025-08-29 18:54:19.775755 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-08-29 18:54:19.852208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-08-29 18:54:19.852247 | orchestrator |
2025-08-29 18:54:19.852260 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-08-29 18:54:19.909731 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:19.909802 | orchestrator |
2025-08-29 18:54:19.909816 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-08-29 18:54:22.142730 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-08-29 18:54:22.142819 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-08-29 18:54:22.142834 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-08-29 18:54:22.142845 | orchestrator |
2025-08-29 18:54:22.142858 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-08-29 18:54:22.890133 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:22.890229 | orchestrator |
2025-08-29 18:54:22.890244 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-08-29 18:54:23.630990 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:23.631196 | orchestrator |
2025-08-29 18:54:23.631216 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-08-29 18:54:24.384318 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:24.384420 | orchestrator |
2025-08-29 18:54:24.384436 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-08-29 18:54:24.458527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-08-29 18:54:24.458619 | orchestrator |
2025-08-29 18:54:24.458637 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-08-29 18:54:24.502470 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:24.502515 | orchestrator |
2025-08-29 18:54:24.502528 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-08-29 18:54:25.255398 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-08-29 18:54:25.255443 | orchestrator |
2025-08-29 18:54:25.255449 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-08-29 18:54:25.348831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-08-29 18:54:25.348881 | orchestrator |
2025-08-29 18:54:25.348887 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-08-29 18:54:26.105086 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:26.105212 | orchestrator |
2025-08-29 18:54:26.105228 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-08-29 18:54:26.719440 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:26.719542 | orchestrator |
2025-08-29 18:54:26.719556 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-08-29 18:54:26.778890 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:54:26.778955 | orchestrator |
2025-08-29 18:54:26.778968 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-08-29 18:54:26.839863 | orchestrator | ok: [testbed-manager]
2025-08-29 18:54:26.839919 | orchestrator |
2025-08-29 18:54:26.839934 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-08-29 18:54:27.722522 | orchestrator | changed: [testbed-manager]
2025-08-29 18:54:27.722618 | orchestrator |
2025-08-29 18:54:27.722633 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-08-29 18:56:05.722693 | orchestrator | changed: [testbed-manager]
2025-08-29 18:56:05.722808 | orchestrator |
2025-08-29 18:56:05.722825 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-08-29 18:56:06.807232 | orchestrator | ok: [testbed-manager]
2025-08-29 18:56:06.807345 | orchestrator |
2025-08-29 18:56:06.807362 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-08-29 18:56:06.859778 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:56:06.859846 | orchestrator |
2025-08-29 18:56:06.859862 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-08-29 18:56:09.429407 | orchestrator | changed: [testbed-manager]
2025-08-29 18:56:09.429513 | orchestrator |
2025-08-29 18:56:09.429530 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-08-29 18:56:09.585028 | orchestrator | ok: [testbed-manager]
2025-08-29 18:56:09.585173 | orchestrator |
2025-08-29 18:56:09.585190 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 18:56:09.585203 | orchestrator |
2025-08-29 18:56:09.585215 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-08-29 18:56:09.640562 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:56:09.640612 | orchestrator |
2025-08-29 18:56:09.640625 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-08-29 18:57:09.703592 | orchestrator | Pausing for 60 seconds
2025-08-29 18:57:09.703713 | orchestrator | changed: [testbed-manager]
2025-08-29 18:57:09.703729 | orchestrator |
2025-08-29 18:57:09.703743 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-08-29 18:57:13.998243 | orchestrator | changed: [testbed-manager]
2025-08-29 18:57:13.998350 | orchestrator |
2025-08-29 18:57:13.998368 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-08-29 18:57:55.784316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-08-29 18:57:55.784434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-08-29 18:57:55.784452 | orchestrator | changed: [testbed-manager]
2025-08-29 18:57:55.784465 | orchestrator |
2025-08-29 18:57:55.784477 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-08-29 18:58:05.918998 | orchestrator | changed: [testbed-manager]
2025-08-29 18:58:05.919164 | orchestrator |
2025-08-29 18:58:05.919184 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-08-29 18:58:06.019740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-08-29 18:58:06.019817 | orchestrator |
2025-08-29 18:58:06.019834 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 18:58:06.019846 | orchestrator |
2025-08-29 18:58:06.019858 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-08-29 18:58:06.077046 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:58:06.077134 | orchestrator |
2025-08-29 18:58:06.077148 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:58:06.077161 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-08-29 18:58:06.077173 | orchestrator |
2025-08-29 18:58:06.212717 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 18:58:06.212776 | orchestrator | + deactivate
2025-08-29 18:58:06.212793 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-08-29 18:58:06.212808 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 18:58:06.212819 | orchestrator | + export PATH
2025-08-29 18:58:06.212830 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-08-29 18:58:06.212862 | orchestrator | + '[' -n '' ']'
2025-08-29 18:58:06.212873 | orchestrator | + hash -r
2025-08-29 18:58:06.212884 | orchestrator | + '[' -n '' ']'
2025-08-29 18:58:06.212895 | orchestrator | + unset VIRTUAL_ENV
2025-08-29 18:58:06.212906 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-08-29 18:58:06.212917 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-08-29 18:58:06.212928 | orchestrator | + unset -f deactivate
2025-08-29 18:58:06.212940 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-08-29 18:58:06.219252 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 18:58:06.219276 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-08-29 18:58:06.219288 | orchestrator | + local max_attempts=60
2025-08-29 18:58:06.219298 | orchestrator | + local name=ceph-ansible
2025-08-29 18:58:06.219309 | orchestrator | + local attempt_num=1
2025-08-29 18:58:06.220053 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 18:58:06.253181 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 18:58:06.253210 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-08-29 18:58:06.253221 | orchestrator | + local max_attempts=60
2025-08-29 18:58:06.253232 | orchestrator | + local name=kolla-ansible
2025-08-29 18:58:06.253243 | orchestrator | + local attempt_num=1
2025-08-29 18:58:06.254266 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-08-29 18:58:06.285602 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 18:58:06.285653 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-08-29 18:58:06.285665 | orchestrator | + local max_attempts=60
2025-08-29 18:58:06.285676 | orchestrator | + local name=osism-ansible
2025-08-29 18:58:06.285687 | orchestrator | + local attempt_num=1
2025-08-29 18:58:06.285877 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-08-29 18:58:06.321222 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 18:58:06.321261 | orchestrator | + [[ true == \t\r\u\e ]]
2025-08-29 18:58:06.321272 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-08-29 18:58:07.128193 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-08-29 18:58:07.367404 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-08-29 18:58:07.367495 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-08-29 18:58:07.367510 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-08-29 18:58:07.367522 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-08-29 18:58:07.367557 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-08-29 18:58:07.367577 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-08-29 18:58:07.367590 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-08-29 18:58:07.367601 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-08-29 18:58:07.367611 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-08-29
18:58:07.367622 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-08-29 18:58:07.367633 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-08-29 18:58:07.367644 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-08-29 18:58:07.367654 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-08-29 18:58:07.367665 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-08-29 18:58:07.367676 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-08-29 18:58:07.375772 | orchestrator | ++ semver latest 7.0.0 2025-08-29 18:58:07.429517 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 18:58:07.429559 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 18:58:07.429574 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-08-29 18:58:07.434294 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-08-29 18:58:19.695747 | orchestrator | 2025-08-29 18:58:19 | INFO  | Task 42dc3f96-5fa8-4895-8592-ed3b99b027a5 (resolvconf) was prepared for execution. 2025-08-29 18:58:19.695849 | orchestrator | 2025-08-29 18:58:19 | INFO  | It takes a moment until task 42dc3f96-5fa8-4895-8592-ed3b99b027a5 (resolvconf) has been started and output is visible here. 
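The xtrace above shows `wait_for_container_healthy` polling `docker inspect` for a container's health status before the deployment continues. A minimal sketch of such a helper, reconstructed from the trace (the real function lives in the testbed deploy scripts; the 5-second retry delay and the error message are assumptions):

```shell
# Sketch of a health-wait helper as suggested by the xtrace above.
# NOTE: reconstructed for illustration, not the actual testbed script.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    while true; do
        # Ask Docker for the container's health status, e.g. "healthy"
        status="$(docker inspect -f '{{.State.Health.Status}}' "$name")"
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed retry delay between polls
    done
}
```

In the trace each invocation (e.g. `wait_for_container_healthy 60 ceph-ansible`) returns on the first iteration because `docker inspect` already reports `healthy`.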
2025-08-29 18:58:33.430412 | orchestrator |
2025-08-29 18:58:33.430527 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-08-29 18:58:33.430543 | orchestrator |
2025-08-29 18:58:33.430558 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 18:58:33.430570 | orchestrator | Friday 29 August 2025 18:58:23 +0000 (0:00:00.149) 0:00:00.149 *********
2025-08-29 18:58:33.430581 | orchestrator | ok: [testbed-manager]
2025-08-29 18:58:33.430593 | orchestrator |
2025-08-29 18:58:33.430604 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-08-29 18:58:33.430620 | orchestrator | Friday 29 August 2025 18:58:27 +0000 (0:00:03.983) 0:00:04.133 *********
2025-08-29 18:58:33.430631 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:58:33.430664 | orchestrator |
2025-08-29 18:58:33.430676 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-08-29 18:58:33.430687 | orchestrator | Friday 29 August 2025 18:58:27 +0000 (0:00:00.066) 0:00:04.200 *********
2025-08-29 18:58:33.430698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-08-29 18:58:33.430709 | orchestrator |
2025-08-29 18:58:33.430720 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-08-29 18:58:33.430731 | orchestrator | Friday 29 August 2025 18:58:27 +0000 (0:00:00.100) 0:00:04.300 *********
2025-08-29 18:58:33.430742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 18:58:33.430752 | orchestrator |
2025-08-29 18:58:33.430763 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-08-29 18:58:33.430774 | orchestrator | Friday 29 August 2025 18:58:27 +0000 (0:00:00.075) 0:00:04.376 *********
2025-08-29 18:58:33.430784 | orchestrator | ok: [testbed-manager]
2025-08-29 18:58:33.430795 | orchestrator |
2025-08-29 18:58:33.430806 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-08-29 18:58:33.430816 | orchestrator | Friday 29 August 2025 18:58:28 +0000 (0:00:01.119) 0:00:05.495 *********
2025-08-29 18:58:33.430827 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:58:33.430837 | orchestrator |
2025-08-29 18:58:33.430848 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-08-29 18:58:33.430859 | orchestrator | Friday 29 August 2025 18:58:29 +0000 (0:00:00.070) 0:00:05.566 *********
2025-08-29 18:58:33.430869 | orchestrator | ok: [testbed-manager]
2025-08-29 18:58:33.430880 | orchestrator |
2025-08-29 18:58:33.430890 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-08-29 18:58:33.430901 | orchestrator | Friday 29 August 2025 18:58:29 +0000 (0:00:00.510) 0:00:06.076 *********
2025-08-29 18:58:33.430912 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:58:33.430922 | orchestrator |
2025-08-29 18:58:33.430933 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-08-29 18:58:33.430945 | orchestrator | Friday 29 August 2025 18:58:29 +0000 (0:00:00.080) 0:00:06.156 *********
2025-08-29 18:58:33.430958 | orchestrator | changed: [testbed-manager]
2025-08-29 18:58:33.430970 | orchestrator |
2025-08-29 18:58:33.430982 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-08-29 18:58:33.430994 | orchestrator | Friday 29 August 2025 18:58:30 +0000 (0:00:00.516) 0:00:06.673 *********
2025-08-29 18:58:33.431005 | orchestrator | changed: [testbed-manager]
2025-08-29 18:58:33.431017 | orchestrator |
2025-08-29 18:58:33.431029 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-08-29 18:58:33.431041 | orchestrator | Friday 29 August 2025 18:58:31 +0000 (0:00:01.097) 0:00:07.770 *********
2025-08-29 18:58:33.431052 | orchestrator | ok: [testbed-manager]
2025-08-29 18:58:33.431064 | orchestrator |
2025-08-29 18:58:33.431075 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-08-29 18:58:33.431121 | orchestrator | Friday 29 August 2025 18:58:32 +0000 (0:00:00.944) 0:00:08.715 *********
2025-08-29 18:58:33.431134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-08-29 18:58:33.431146 | orchestrator |
2025-08-29 18:58:33.431167 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-08-29 18:58:33.431179 | orchestrator | Friday 29 August 2025 18:58:32 +0000 (0:00:00.077) 0:00:08.792 *********
2025-08-29 18:58:33.431191 | orchestrator | changed: [testbed-manager]
2025-08-29 18:58:33.431203 | orchestrator |
2025-08-29 18:58:33.431215 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:58:33.431227 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 18:58:33.431248 | orchestrator |
2025-08-29 18:58:33.431260 | orchestrator |
2025-08-29 18:58:33.431271 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:58:33.431284 | orchestrator | Friday 29 August 2025 18:58:33 +0000 (0:00:01.015) 0:00:09.808 *********
2025-08-29 18:58:33.431296 | orchestrator | ===============================================================================
2025-08-29 18:58:33.431308 | orchestrator | Gathering Facts --------------------------------------------------------- 3.98s
2025-08-29 18:58:33.431319 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.12s
2025-08-29 18:58:33.431330 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s
2025-08-29 18:58:33.431340 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.02s
2025-08-29 18:58:33.431351 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s
2025-08-29 18:58:33.431362 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-08-29 18:58:33.431389 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s
2025-08-29 18:58:33.431401 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2025-08-29 18:58:33.431412 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-08-29 18:58:33.431422 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-08-29 18:58:33.431433 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-08-29 18:58:33.431443 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-08-29 18:58:33.431454 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-08-29 18:58:33.636277 | orchestrator | + osism apply sshconfig
2025-08-29 18:58:45.535869 | orchestrator | 2025-08-29 18:58:45 | INFO  | Task bc048330-fb9d-4f4b-9736-e9847969adaa (sshconfig) was prepared for execution.
2025-08-29 18:58:45.535980 | orchestrator | 2025-08-29 18:58:45 | INFO  | It takes a moment until task bc048330-fb9d-4f4b-9736-e9847969adaa (sshconfig) has been started and output is visible here.
2025-08-29 18:58:57.345030 | orchestrator |
2025-08-29 18:58:57.345185 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-08-29 18:58:57.345204 | orchestrator |
2025-08-29 18:58:57.345217 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-08-29 18:58:57.345229 | orchestrator | Friday 29 August 2025 18:58:49 +0000 (0:00:00.180) 0:00:00.180 *********
2025-08-29 18:58:57.345241 | orchestrator | ok: [testbed-manager]
2025-08-29 18:58:57.345252 | orchestrator |
2025-08-29 18:58:57.345263 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-08-29 18:58:57.345274 | orchestrator | Friday 29 August 2025 18:58:50 +0000 (0:00:00.557) 0:00:00.737 *********
2025-08-29 18:58:57.345285 | orchestrator | changed: [testbed-manager]
2025-08-29 18:58:57.345297 | orchestrator |
2025-08-29 18:58:57.345308 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-08-29 18:58:57.345319 | orchestrator | Friday 29 August 2025 18:58:50 +0000 (0:00:00.537) 0:00:01.275 *********
2025-08-29 18:58:57.345329 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-08-29 18:58:57.345341 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-08-29 18:58:57.345352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-08-29 18:58:57.345363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-08-29 18:58:57.345375 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-08-29 18:58:57.345385 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-08-29 18:58:57.345416 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-08-29 18:58:57.345427 | orchestrator |
2025-08-29 18:58:57.345460 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-08-29 18:58:57.345472 | orchestrator | Friday 29 August 2025 18:58:56 +0000 (0:00:05.771) 0:00:07.046 *********
2025-08-29 18:58:57.345482 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:58:57.345493 | orchestrator |
2025-08-29 18:58:57.345504 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-08-29 18:58:57.345514 | orchestrator | Friday 29 August 2025 18:58:56 +0000 (0:00:00.072) 0:00:07.119 *********
2025-08-29 18:58:57.345525 | orchestrator | changed: [testbed-manager]
2025-08-29 18:58:57.345536 | orchestrator |
2025-08-29 18:58:57.345547 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:58:57.345559 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 18:58:57.345570 | orchestrator |
2025-08-29 18:58:57.345583 | orchestrator |
2025-08-29 18:58:57.345595 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:58:57.345608 | orchestrator | Friday 29 August 2025 18:58:57 +0000 (0:00:00.591) 0:00:07.710 *********
2025-08-29 18:58:57.345620 | orchestrator | ===============================================================================
2025-08-29 18:58:57.345632 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.77s
2025-08-29 18:58:57.345644 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s
2025-08-29 18:58:57.345656 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-08-29 18:58:57.345668 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s
2025-08-29 18:58:57.345680 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-08-29 18:58:57.640145 | orchestrator | + osism apply known-hosts
2025-08-29 18:59:09.763436 | orchestrator | 2025-08-29 18:59:09 | INFO  | Task 8bd8fe9a-a150-4b0f-98fa-f1fe6c366ec9 (known-hosts) was prepared for execution.
2025-08-29 18:59:09.763548 | orchestrator | 2025-08-29 18:59:09 | INFO  | It takes a moment until task 8bd8fe9a-a150-4b0f-98fa-f1fe6c366ec9 (known-hosts) has been started and output is visible here.
2025-08-29 18:59:26.224558 | orchestrator |
2025-08-29 18:59:26.224678 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-08-29 18:59:26.224695 | orchestrator |
2025-08-29 18:59:26.224707 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-08-29 18:59:26.224719 | orchestrator | Friday 29 August 2025 18:59:13 +0000 (0:00:00.177) 0:00:00.177 *********
2025-08-29 18:59:26.224731 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-08-29 18:59:26.224743 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-08-29 18:59:26.224754 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-08-29 18:59:26.224765 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-08-29 18:59:26.224776 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-08-29 18:59:26.224787 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-08-29 18:59:26.224798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-08-29 18:59:26.224809 | orchestrator |
2025-08-29 18:59:26.224820 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-08-29 18:59:26.224832 | orchestrator | Friday 29 August 2025 18:59:19 +0000 (0:00:06.056) 0:00:06.234 *********
2025-08-29 18:59:26.224845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-08-29 18:59:26.224857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-08-29 18:59:26.224868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-08-29 18:59:26.224900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-08-29 18:59:26.224911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-08-29 18:59:26.224932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-08-29 18:59:26.224943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-08-29 18:59:26.224954 | orchestrator |
2025-08-29 18:59:26.224965 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:26.224976 | orchestrator | Friday 29 August 2025 18:59:20 +0000 (0:00:00.167) 0:00:06.401 *********
2025-08-29 18:59:26.224988 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKauBHDcgeNuVxHcw0R+G8f0cd7LDz6k/R9C8kbQRWhVM/adcTHu6vL4luHL25FIdJNAsAfEtEqVRl0I8/6+saM=)
2025-08-29 18:59:26.225003 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFoa9BYe0omF1creLArrRs6nZ7GuT56UzYB/l9mBGsbt40ekQWgEb0RVYZiMOeWeohUFktJuJEf5S7wkWP05g+ZW19CvgwymtG4mpgjZ1wm7r4L049zlRR5k9mK1WNmmF7dIYSlLZ49MPKYaof9Eq7sMq/FJDYCq3cFNOm8PSTFE1S221XSlC/ZPDJKv8HuLso5XHGn2EKY7pdMJNm2Jb/lyY4trKoQhSrf1V2m8KeZVBzO00prLsjDJM5ZaYavHRKMu1InNLMk4B3nzlpblGiR4Oxph7VFhACkOzzwQi5h5rfM2HF1NXddIaNPOGZYACyMwYpWrFqggZHtwWCi5e1Qx9Qhx5bzZsDa67LPYtvIcRjBLs977neQHbW6rwiYabVTOFiOH8GVbH9vvcBKGQwtivOMy8joPW973UAvap0akfHiHeZ8XBZdnHlKnr+SA+djjYwGYtBQ7EvqX+HBJmp2jqIfLCyH5RRUXs9SUNogrdt/Gu5u18/YqLY88/4ASU=)
2025-08-29 18:59:26.225017 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5rYImg4crqCilW6/N2sOhjTur808nYO1blHbdI5nms)
2025-08-29 18:59:26.225029 | orchestrator |
2025-08-29 18:59:26.225040 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:26.225051 | orchestrator | Friday 29 August 2025 18:59:21 +0000 (0:00:01.115) 0:00:07.517 *********
2025-08-29 18:59:26.225080 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuaM3SdD0iLsOYjVBHe1xI7OOnW9xUiM8oYQrkRlnT8SKdicnIDpEPy9+IvRcw8858AHSccX31ADNjluRFTKNlWzOtR+7HS/4CshWUcLEz/9dlo5OiNkRJsSy/3pIrwoItLiLkrgKg1gZ7aiMVD5cagQS11CJf2iiIOHx+7eJpGruW4zaHad9hsj7r7gjl9E0KbxGzPy+btaJSpG8wxzt0Sis88iVd6+wt11DtVCt0zKn88FJGl/fY406zuHHqa9Wc3OLEF52eTWkGjpfZrgcZ975XBltvD3gDt0a5p6aojHJj1MWCTJs+EjP1Pini+vUBYZo2H1jLPrsOWpghhhR7PjJibMzkeamKtljD63QVqQuY+9A5Dj9tegiZsqWjlzon3hjvzgfy006c2IeZd0/yRi38xalvduZ/Bbiep9Jnn517UkX/yIcTbTjRpSai2lKvJAE+3hrJUx3PDxMYA3rWuGNOu7ngZs15XNmGGf90YnqdZ2vJHGBR4JfYYHAzJw0=)
2025-08-29 18:59:26.225121 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDQz1iaKdURDZn9FXVLlgjyWFm8KIRZiMepDjhkjMbVr93ecEjX+E8qTIgbmrcQ8eeKiYdk7JcuMh1jOFA10pfA=)
2025-08-29 18:59:26.225134 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+fC7P7nXYhqP5f9NMlNId7kLs6cwqpCe6K2cQDEKBp)
2025-08-29 18:59:26.225146 | orchestrator |
2025-08-29 18:59:26.225159 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:26.225171 | orchestrator | Friday 29 August 2025 18:59:22 +0000 (0:00:01.036) 0:00:08.554 *********
2025-08-29 18:59:26.225184 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOW/zRwjzSXKZSgJYk7tMp2I1O1wdNgtkWS+9Rss+puu2v3L4FP8dYVX0o6VkUXsbYDRN6hgmryFM3XGZW8ACRgYiL2yQUeCGU7E+ymu0lunNK7NciGTDEuUCgzNwOmq0ICvsA+MBLo1W3DcTv2sX4EADG6/R8H61aohZb3MrBP5Osg6fgteWPv7dBM+C82dgXWFlOBDgjPPAvra8g2vbgRN7zj4yFNTeBOixnaSpDbNJCmfOtumRVA+0RKfyTjjEBrZSgKM2RGD5JWmXa7VuyhwiKs8+vEN9k5y+iEqPqlf2/DtlifEn3p/vXNAsSPG/eA0jbIQStmRXpxcN1u0YDQfQgEW/TzdNceO0aJvdEP5bQnposW4A9RCEqQeza4mi6gaYRd7FhLfSogB4R257mFffHEZzfdq87WCzzIGCJSBO5iVHlaCaXSXWy3FxbTxNMiy5aVpOyqUq8jd9xRQ28pzwF/GCUNC5YTlJGPpELu4ahvcv7YWaAyfywC3+WKq8=)
2025-08-29 18:59:26.225205 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEL+f2TiXb3DRPx3/rrLuUq+O62jYKwoA1ruyyMOZPZ8nC22St6qOzMMqnevcvi1uOH0vPxRKakAnXsO2M69NQc=)
2025-08-29 18:59:26.225216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILdTezX3alAxpXLNwuU9OTLXUONy5HwavvEjoTCj+Qzu)
2025-08-29 18:59:26.225227 | orchestrator |
2025-08-29 18:59:26.225238 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:26.225248 | orchestrator | Friday 29 August 2025 18:59:23 +0000 (0:00:01.028) 0:00:09.583 *********
2025-08-29 18:59:26.225325 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFaVoaZ5M/1aGr7CTLLAYrKpCuroERfKiz6d7qss1VRIKS8270ZOYuH+ccSZU5ww3AHPAJc05KuynjtOXVI4IfFTxRq+taX08xS0c5rXnomJ3x9aXCAu6BbzIGz2YU8YaSeQNV66NNwQEbYY+43jY7KxONt6GEfXSdd/D7wxQzFNJvI1C7f/9JnJGTwtzMBnYSf5efLTAokIjz1Yzy/ZJacnFv1euqKxqlsrm/e4AgKA4YguE7zYxmBcyO8FHn+5PG7bOm4BGvx/98h0fueagiPbjzs4VaKaGmpaFpRK83lOlFPnQIDhjQb0jncdN8mKX6hvv/R9/1bIhKTpwWzFYrKG2WJ/tMTNmZp+gQBC3mVY0zp2zg3ZVHBGmL4+4OYYf/JEuqly/HZLEK4LJrcxYCOkE1ybtQAL5oJ1Y7bgnm4JWxcO7jTCzGueWJjulopXfWSexLO+wKwMgm8X8gXryasbdyKmVX4MSO41D2ZPAPmxqOUDgmh4DB8lW0RIu0Trc=)
2025-08-29 18:59:26.225338 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0pb7yQVZPzQaAckfuEk7EQAKUtVK9QgLyRgeq4LTp3uvUyO0djX35wptGLhMJE85qPBzMfYKBPcPGDggUm7Sg=)
2025-08-29 18:59:26.225349 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtV4j9wpXveCZvgF/XCd9wqNFkFi4GMl6yVTvxNARJ4)
2025-08-29 18:59:26.225360 | orchestrator |
2025-08-29 18:59:26.225371 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:26.225381 | orchestrator | Friday 29 August 2025 18:59:24 +0000 (0:00:00.959) 0:00:10.542 *********
2025-08-29 18:59:26.225392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFJtB5ChNlN/Pr3DXbQANybmWgR71HiZZIeGjOSjCRJbmDpfFBRm3U4gQMPXjQiNFxXxDf/6nrwlM0Fjjyox9E=)
2025-08-29 18:59:26.225403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDEyO+lXviHlQ7wqG6ZY5Gn2SOzYpsCQg3oCYRrWyp5P)
2025-08-29 18:59:26.225415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUDzSk68lkqEZ+KfPCjdETP/wwtOJEJKoCaoMysvKUMq0fnJW5fBqUtpeq+Itq8CZ1Yg5Uk6syqmz4sObJ/AYTgrCV/MgXYlLzTu3X/KzPp8/KFUDbw1IgHdrfZ9azYk8QNDUTYuZPDdytKd32ZLx2qkWxEJ+zvEwiCW6JAKugxbh1G45luUP52387YLf3hwueu3AO6x271pkfgs96pH5xNAhl3g/rS6XiCyycaSK+onMhoqGQWGeIOmZNCk5yiswCTyDwv1djE2dHhAPousm0m8isaXJguD1+E+cglT0nlNmQwi51EQyX5EpZqbDc3VpFs6vYOshP6bZjLjCga8Nqt1+Dz0+pq6Rc0zRjgmcO27iwA8dauFtFwhKDeTbl1dOAIr9XPD+ihj+mnDd9PhrtXgr/8t97c+N4A4lI6QmOfEExJXQ2OahN8UBZ2ohu5hnECYqgqMuVb9P/qB6KzkyGfRGXwDLH7mBQ79pGjmdwwEvdqvBFYGdV1JavQnLoUM0=)
2025-08-29 18:59:26.225426 | orchestrator |
2025-08-29 18:59:26.225437 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:26.225448 | orchestrator | Friday 29 August 2025 18:59:25 +0000 (0:00:01.032) 0:00:11.575 *********
2025-08-29 18:59:26.225466 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMDNa41g3IhZxwgob75OiFzKg7vO79ccFtp/UmRWI2816nC9g9P1wbYDIv/oGwUlMPOC5wkHCM7AxkwWZ0ebUo0=)
2025-08-29 18:59:37.429209 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOG13oPybVbrTQoPxNqzsfcW1u4iOS15Fh6/LMfu/Pw+)
2025-08-29 18:59:37.429353 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6Ka/pwaNHv76fQoUpd06I5+TymNB5m+DUwx3tPYVmeOwLANLdt9BqREciRODAm0D2lpy3yprGjhvlsD4EMR09n06ElV5TQd4G2l63xeulo++g7ExIRleWd3CjKs+Fa8jFf+orDHXcWEdGCVhIeCTV6caxAKNSsVXUSbgKqsg+czwYLRyihutU6Uj0iQXE+g5Li0YUdEpz700HKZZxOK4AbIR4YlPBR6XaIvCm9iQk2uKcyXu3Eum/ar0NU7lxK/snyuNZRebV9Yp/AJ/Ra1wEC4Ttp+YTPzflm3O52W2R+hMAsH37+oGkGKi9PgYxZjcdnY7N2i7MkXTnHuNho4I4hbybEHYQZ+YKWiMujOGcNfmrp6gNRipSMLX25Sjrls0qCtsWftaHR7UKRZz924za01qxRinJ9wuxvKpGivTxeV8UzLlin8hQXZstRNnpamQPhj3xW69y8oPdh78fhSkO3nO7gqYGrLh7utOya/tRZqsiOTI1QZT+O1qQs5dW4T0=)
2025-08-29 18:59:37.429374 | orchestrator |
2025-08-29 18:59:37.429388 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:37.429400 | orchestrator | Friday 29 August 2025 18:59:26 +0000 (0:00:01.022) 0:00:12.598 *********
2025-08-29 18:59:37.429412 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr6sK3GIFKNkE2ec8rFyAlTxiaRv6Iqa6jBUUn+5B31cfGp/m4vKntAdSffdDaZtUkXZe71crFZjzRgCX0rtg608e3wY8oWISqTK86F+Ne1JlR968YS8tahxDWnqb+SFHaZMZTnz9wg0xEMfdO3AlkhMd7qVvL+xK5h4jgyio92zKgFbV53N9Gjh1S744ArVpseE/Yy374D7n0wfbU2oz2NKYm/u6YLeU/KTZJjBydgfiSHIWCLziQSr9DRX0V71vMfuEbpTHu7pHTuRwmhFkI84/VGGFl1mrj+tZw4vDyyjyZUlmkjNwKf61N64iE7oJWAUeASkJ8sGPZ001CWAgU3wdgfIH4fGpD38OQx46iUu1nFiI1DGFQYHyDzKkFoMKBduK076uEzUSxrzM8M+zHis8lPbn5FQmuU04BEOgL0hXehbwsfJwbghMPTIFgQ8xs8aykyvKEuCQXGY++2TiYyEuho21xQn+oMc5vIBJkT542viUnpZR7hnLE7lQaC2E=)
2025-08-29 18:59:37.429424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzulrlG/PPPLvtLMYogH/t1YB9gbUumD6D11qw9vGAnWXMLiZr4ytLglZ4Xhla0IXzLvAuA8E1Jnhu2/CTUbO4=)
2025-08-29 18:59:37.429437 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGsRR0S2NOQoQTsX66w1y+lQDL8HSgMEctZ4jdwIl7mM)
2025-08-29 18:59:37.429448 | orchestrator |
2025-08-29 18:59:37.429468 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-08-29 18:59:37.429489 | orchestrator | Friday 29 August 2025 18:59:27 +0000 (0:00:01.133) 0:00:13.731 *********
2025-08-29 18:59:37.429511 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-08-29 18:59:37.429530 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-08-29 18:59:37.429548 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-08-29 18:59:37.429565 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-08-29 18:59:37.429595 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-08-29 18:59:37.429626 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-08-29 18:59:37.429646 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-08-29 18:59:37.429665 | orchestrator |
2025-08-29 18:59:37.429704 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-08-29 18:59:37.429727 | orchestrator | Friday 29 August 2025 18:59:32 +0000 (0:00:05.516) 0:00:19.247 *********
2025-08-29 18:59:37.429749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-08-29 18:59:37.429771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-08-29 18:59:37.429792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-08-29 18:59:37.429809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-08-29 18:59:37.429835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-08-29 18:59:37.429845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-08-29 18:59:37.429856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-08-29 18:59:37.429867 | orchestrator |
2025-08-29 18:59:37.429897 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:37.429909 | orchestrator | Friday 29 August 2025 18:59:33 +0000 (0:00:00.178) 0:00:19.425 *********
2025-08-29 18:59:37.429920 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5rYImg4crqCilW6/N2sOhjTur808nYO1blHbdI5nms)
2025-08-29 18:59:37.429932 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFoa9BYe0omF1creLArrRs6nZ7GuT56UzYB/l9mBGsbt40ekQWgEb0RVYZiMOeWeohUFktJuJEf5S7wkWP05g+ZW19CvgwymtG4mpgjZ1wm7r4L049zlRR5k9mK1WNmmF7dIYSlLZ49MPKYaof9Eq7sMq/FJDYCq3cFNOm8PSTFE1S221XSlC/ZPDJKv8HuLso5XHGn2EKY7pdMJNm2Jb/lyY4trKoQhSrf1V2m8KeZVBzO00prLsjDJM5ZaYavHRKMu1InNLMk4B3nzlpblGiR4Oxph7VFhACkOzzwQi5h5rfM2HF1NXddIaNPOGZYACyMwYpWrFqggZHtwWCi5e1Qx9Qhx5bzZsDa67LPYtvIcRjBLs977neQHbW6rwiYabVTOFiOH8GVbH9vvcBKGQwtivOMy8joPW973UAvap0akfHiHeZ8XBZdnHlKnr+SA+djjYwGYtBQ7EvqX+HBJmp2jqIfLCyH5RRUXs9SUNogrdt/Gu5u18/YqLY88/4ASU=)
2025-08-29 18:59:37.429944 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKauBHDcgeNuVxHcw0R+G8f0cd7LDz6k/R9C8kbQRWhVM/adcTHu6vL4luHL25FIdJNAsAfEtEqVRl0I8/6+saM=)
2025-08-29 18:59:37.429955 | orchestrator |
2025-08-29 18:59:37.429966 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 18:59:37.429977 | orchestrator | Friday 29 August 2025 18:59:34 +0000 (0:00:01.114) 0:00:20.540 *********
2025-08-29 18:59:37.429988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+fC7P7nXYhqP5f9NMlNId7kLs6cwqpCe6K2cQDEKBp)
2025-08-29 18:59:37.429999 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuaM3SdD0iLsOYjVBHe1xI7OOnW9xUiM8oYQrkRlnT8SKdicnIDpEPy9+IvRcw8858AHSccX31ADNjluRFTKNlWzOtR+7HS/4CshWUcLEz/9dlo5OiNkRJsSy/3pIrwoItLiLkrgKg1gZ7aiMVD5cagQS11CJf2iiIOHx+7eJpGruW4zaHad9hsj7r7gjl9E0KbxGzPy+btaJSpG8wxzt0Sis88iVd6+wt11DtVCt0zKn88FJGl/fY406zuHHqa9Wc3OLEF52eTWkGjpfZrgcZ975XBltvD3gDt0a5p6aojHJj1MWCTJs+EjP1Pini+vUBYZo2H1jLPrsOWpghhhR7PjJibMzkeamKtljD63QVqQuY+9A5Dj9tegiZsqWjlzon3hjvzgfy006c2IeZd0/yRi38xalvduZ/Bbiep9Jnn517UkX/yIcTbTjRpSai2lKvJAE+3hrJUx3PDxMYA3rWuGNOu7ngZs15XNmGGf90YnqdZ2vJHGBR4JfYYHAzJw0=) 2025-08-29 18:59:37.430010 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDQz1iaKdURDZn9FXVLlgjyWFm8KIRZiMepDjhkjMbVr93ecEjX+E8qTIgbmrcQ8eeKiYdk7JcuMh1jOFA10pfA=) 2025-08-29 18:59:37.430108 | orchestrator | 2025-08-29 18:59:37.430147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 18:59:37.430159 | orchestrator | Friday 29 August 2025 18:59:35 +0000 (0:00:01.115) 0:00:21.656 ********* 2025-08-29 18:59:37.430170 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEL+f2TiXb3DRPx3/rrLuUq+O62jYKwoA1ruyyMOZPZ8nC22St6qOzMMqnevcvi1uOH0vPxRKakAnXsO2M69NQc=) 2025-08-29 18:59:37.430183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCOW/zRwjzSXKZSgJYk7tMp2I1O1wdNgtkWS+9Rss+puu2v3L4FP8dYVX0o6VkUXsbYDRN6hgmryFM3XGZW8ACRgYiL2yQUeCGU7E+ymu0lunNK7NciGTDEuUCgzNwOmq0ICvsA+MBLo1W3DcTv2sX4EADG6/R8H61aohZb3MrBP5Osg6fgteWPv7dBM+C82dgXWFlOBDgjPPAvra8g2vbgRN7zj4yFNTeBOixnaSpDbNJCmfOtumRVA+0RKfyTjjEBrZSgKM2RGD5JWmXa7VuyhwiKs8+vEN9k5y+iEqPqlf2/DtlifEn3p/vXNAsSPG/eA0jbIQStmRXpxcN1u0YDQfQgEW/TzdNceO0aJvdEP5bQnposW4A9RCEqQeza4mi6gaYRd7FhLfSogB4R257mFffHEZzfdq87WCzzIGCJSBO5iVHlaCaXSXWy3FxbTxNMiy5aVpOyqUq8jd9xRQ28pzwF/GCUNC5YTlJGPpELu4ahvcv7YWaAyfywC3+WKq8=) 2025-08-29 18:59:37.430203 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILdTezX3alAxpXLNwuU9OTLXUONy5HwavvEjoTCj+Qzu) 2025-08-29 18:59:37.430214 | orchestrator | 2025-08-29 18:59:37.430224 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 18:59:37.430235 | orchestrator | Friday 29 August 2025 18:59:36 +0000 (0:00:01.063) 0:00:22.719 ********* 2025-08-29 18:59:37.430270 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFaVoaZ5M/1aGr7CTLLAYrKpCuroERfKiz6d7qss1VRIKS8270ZOYuH+ccSZU5ww3AHPAJc05KuynjtOXVI4IfFTxRq+taX08xS0c5rXnomJ3x9aXCAu6BbzIGz2YU8YaSeQNV66NNwQEbYY+43jY7KxONt6GEfXSdd/D7wxQzFNJvI1C7f/9JnJGTwtzMBnYSf5efLTAokIjz1Yzy/ZJacnFv1euqKxqlsrm/e4AgKA4YguE7zYxmBcyO8FHn+5PG7bOm4BGvx/98h0fueagiPbjzs4VaKaGmpaFpRK83lOlFPnQIDhjQb0jncdN8mKX6hvv/R9/1bIhKTpwWzFYrKG2WJ/tMTNmZp+gQBC3mVY0zp2zg3ZVHBGmL4+4OYYf/JEuqly/HZLEK4LJrcxYCOkE1ybtQAL5oJ1Y7bgnm4JWxcO7jTCzGueWJjulopXfWSexLO+wKwMgm8X8gXryasbdyKmVX4MSO41D2ZPAPmxqOUDgmh4DB8lW0RIu0Trc=) 2025-08-29 18:59:41.737004 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0pb7yQVZPzQaAckfuEk7EQAKUtVK9QgLyRgeq4LTp3uvUyO0djX35wptGLhMJE85qPBzMfYKBPcPGDggUm7Sg=) 2025-08-29 18:59:41.737143 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtV4j9wpXveCZvgF/XCd9wqNFkFi4GMl6yVTvxNARJ4) 2025-08-29 18:59:41.737162 | orchestrator | 2025-08-29 18:59:41.737175 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 18:59:41.737188 | orchestrator | Friday 29 August 2025 18:59:37 +0000 (0:00:01.084) 0:00:23.804 ********* 2025-08-29 18:59:41.737200 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLFJtB5ChNlN/Pr3DXbQANybmWgR71HiZZIeGjOSjCRJbmDpfFBRm3U4gQMPXjQiNFxXxDf/6nrwlM0Fjjyox9E=) 2025-08-29 18:59:41.737213 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUDzSk68lkqEZ+KfPCjdETP/wwtOJEJKoCaoMysvKUMq0fnJW5fBqUtpeq+Itq8CZ1Yg5Uk6syqmz4sObJ/AYTgrCV/MgXYlLzTu3X/KzPp8/KFUDbw1IgHdrfZ9azYk8QNDUTYuZPDdytKd32ZLx2qkWxEJ+zvEwiCW6JAKugxbh1G45luUP52387YLf3hwueu3AO6x271pkfgs96pH5xNAhl3g/rS6XiCyycaSK+onMhoqGQWGeIOmZNCk5yiswCTyDwv1djE2dHhAPousm0m8isaXJguD1+E+cglT0nlNmQwi51EQyX5EpZqbDc3VpFs6vYOshP6bZjLjCga8Nqt1+Dz0+pq6Rc0zRjgmcO27iwA8dauFtFwhKDeTbl1dOAIr9XPD+ihj+mnDd9PhrtXgr/8t97c+N4A4lI6QmOfEExJXQ2OahN8UBZ2ohu5hnECYqgqMuVb9P/qB6KzkyGfRGXwDLH7mBQ79pGjmdwwEvdqvBFYGdV1JavQnLoUM0=) 2025-08-29 18:59:41.737227 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDEyO+lXviHlQ7wqG6ZY5Gn2SOzYpsCQg3oCYRrWyp5P) 2025-08-29 18:59:41.737238 | orchestrator | 2025-08-29 18:59:41.737249 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 18:59:41.737260 | orchestrator | Friday 29 August 2025 18:59:38 +0000 (0:00:01.104) 0:00:24.909 ********* 2025-08-29 18:59:41.737271 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOG13oPybVbrTQoPxNqzsfcW1u4iOS15Fh6/LMfu/Pw+) 2025-08-29 18:59:41.737282 | orchestrator | changed: 
[testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6Ka/pwaNHv76fQoUpd06I5+TymNB5m+DUwx3tPYVmeOwLANLdt9BqREciRODAm0D2lpy3yprGjhvlsD4EMR09n06ElV5TQd4G2l63xeulo++g7ExIRleWd3CjKs+Fa8jFf+orDHXcWEdGCVhIeCTV6caxAKNSsVXUSbgKqsg+czwYLRyihutU6Uj0iQXE+g5Li0YUdEpz700HKZZxOK4AbIR4YlPBR6XaIvCm9iQk2uKcyXu3Eum/ar0NU7lxK/snyuNZRebV9Yp/AJ/Ra1wEC4Ttp+YTPzflm3O52W2R+hMAsH37+oGkGKi9PgYxZjcdnY7N2i7MkXTnHuNho4I4hbybEHYQZ+YKWiMujOGcNfmrp6gNRipSMLX25Sjrls0qCtsWftaHR7UKRZz924za01qxRinJ9wuxvKpGivTxeV8UzLlin8hQXZstRNnpamQPhj3xW69y8oPdh78fhSkO3nO7gqYGrLh7utOya/tRZqsiOTI1QZT+O1qQs5dW4T0=) 2025-08-29 18:59:41.737318 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMDNa41g3IhZxwgob75OiFzKg7vO79ccFtp/UmRWI2816nC9g9P1wbYDIv/oGwUlMPOC5wkHCM7AxkwWZ0ebUo0=) 2025-08-29 18:59:41.737330 | orchestrator | 2025-08-29 18:59:41.737341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 18:59:41.737352 | orchestrator | Friday 29 August 2025 18:59:39 +0000 (0:00:01.068) 0:00:25.978 ********* 2025-08-29 18:59:41.737363 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr6sK3GIFKNkE2ec8rFyAlTxiaRv6Iqa6jBUUn+5B31cfGp/m4vKntAdSffdDaZtUkXZe71crFZjzRgCX0rtg608e3wY8oWISqTK86F+Ne1JlR968YS8tahxDWnqb+SFHaZMZTnz9wg0xEMfdO3AlkhMd7qVvL+xK5h4jgyio92zKgFbV53N9Gjh1S744ArVpseE/Yy374D7n0wfbU2oz2NKYm/u6YLeU/KTZJjBydgfiSHIWCLziQSr9DRX0V71vMfuEbpTHu7pHTuRwmhFkI84/VGGFl1mrj+tZw4vDyyjyZUlmkjNwKf61N64iE7oJWAUeASkJ8sGPZ001CWAgU3wdgfIH4fGpD38OQx46iUu1nFiI1DGFQYHyDzKkFoMKBduK076uEzUSxrzM8M+zHis8lPbn5FQmuU04BEOgL0hXehbwsfJwbghMPTIFgQ8xs8aykyvKEuCQXGY++2TiYyEuho21xQn+oMc5vIBJkT542viUnpZR7hnLE7lQaC2E=) 2025-08-29 18:59:41.737374 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMzulrlG/PPPLvtLMYogH/t1YB9gbUumD6D11qw9vGAnWXMLiZr4ytLglZ4Xhla0IXzLvAuA8E1Jnhu2/CTUbO4=)
2025-08-29 18:59:41.737386 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGsRR0S2NOQoQTsX66w1y+lQDL8HSgMEctZ4jdwIl7mM)
2025-08-29 18:59:41.737396 | orchestrator |
2025-08-29 18:59:41.737407 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-08-29 18:59:41.737418 | orchestrator | Friday 29 August 2025 18:59:40 +0000 (0:00:01.129) 0:00:27.107 *********
2025-08-29 18:59:41.737429 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 18:59:41.737441 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 18:59:41.737470 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 18:59:41.737482 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 18:59:41.737493 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 18:59:41.737503 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 18:59:41.737514 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 18:59:41.737525 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:59:41.737537 | orchestrator |
2025-08-29 18:59:41.737549 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-08-29 18:59:41.737562 | orchestrator | Friday 29 August 2025 18:59:40 +0000 (0:00:00.165) 0:00:27.273 *********
2025-08-29 18:59:41.737574 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:59:41.737586 | orchestrator |
2025-08-29 18:59:41.737598 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-08-29 18:59:41.737610 | orchestrator | Friday 29 August 2025 18:59:40 +0000 (0:00:00.062) 0:00:27.335 *********
2025-08-29 18:59:41.737623 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:59:41.737635 | orchestrator |
2025-08-29 18:59:41.737647 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-08-29 18:59:41.737658 | orchestrator | Friday 29 August 2025 18:59:41 +0000 (0:00:00.061) 0:00:27.396 *********
2025-08-29 18:59:41.737670 | orchestrator | changed: [testbed-manager]
2025-08-29 18:59:41.737682 | orchestrator |
2025-08-29 18:59:41.737694 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:59:41.737707 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 18:59:41.737741 | orchestrator |
2025-08-29 18:59:41.737760 | orchestrator |
2025-08-29 18:59:41.737780 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:59:41.737810 | orchestrator | Friday 29 August 2025 18:59:41 +0000 (0:00:00.483) 0:00:27.880 *********
2025-08-29 18:59:41.737823 | orchestrator | ===============================================================================
2025-08-29 18:59:41.737834 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.06s
2025-08-29 18:59:41.737846 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.52s
2025-08-29 18:59:41.737875 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-08-29 18:59:41.737888 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-08-29 18:59:41.737899 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-08-29 18:59:41.737909 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-08-29 18:59:41.737920 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-08-29 18:59:41.737930 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-08-29 18:59:41.737941 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-08-29 18:59:41.737951 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-08-29 18:59:41.737962 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-08-29 18:59:41.737973 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-08-29 18:59:41.737983 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-08-29 18:59:41.737994 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-08-29 18:59:41.738004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-08-29 18:59:41.738015 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2025-08-29 18:59:41.738081 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s
2025-08-29 18:59:41.738122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2025-08-29 18:59:41.738134 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-08-29 18:59:41.738145 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2025-08-29 18:59:42.030262 | orchestrator | + osism apply squid
2025-08-29 18:59:54.085566 | orchestrator | 2025-08-29 18:59:54 | INFO  | Task 0c3072a7-a6f5-4886-ab36-07dd6de347df (squid) was prepared for execution.
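Editor's aside (not part of the job output): the known_hosts play above scans each host with `ssh-keyscan` and writes `host keytype key` lines into a known_hosts file. A minimal, self-contained sketch of that result can be checked offline with `ssh-keygen -F`; the host `192.0.2.10` below is a documentation placeholder, and the key is one of the ed25519 keys scanned earlier in this log.

```shell
# Sketch: write one known_hosts entry by hand, then look the host up.
# ssh-keygen -F prints the matching entry and exits 0 when the host is known.
kh=$(mktemp)
echo '192.0.2.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5rYImg4crqCilW6/N2sOhjTur808nYO1blHbdI5nms' >> "$kh"
ssh-keygen -F 192.0.2.10 -f "$kh"
rm -f "$kh"
```

On the testbed itself the same lookup would point `-f` at the operator's actual known_hosts file (its path is not shown in this log).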
2025-08-29 18:59:54.085677 | orchestrator | 2025-08-29 18:59:54 | INFO  | It takes a moment until task 0c3072a7-a6f5-4886-ab36-07dd6de347df (squid) has been started and output is visible here.
2025-08-29 19:01:51.620448 | orchestrator |
2025-08-29 19:01:51.620565 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-08-29 19:01:51.620583 | orchestrator |
2025-08-29 19:01:51.620595 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-08-29 19:01:51.620606 | orchestrator | Friday 29 August 2025 18:59:58 +0000 (0:00:00.183) 0:00:00.183 *********
2025-08-29 19:01:51.620618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 19:01:51.620630 | orchestrator |
2025-08-29 19:01:51.620641 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-08-29 19:01:51.620670 | orchestrator | Friday 29 August 2025 18:59:58 +0000 (0:00:00.102) 0:00:00.286 *********
2025-08-29 19:01:51.620682 | orchestrator | ok: [testbed-manager]
2025-08-29 19:01:51.620694 | orchestrator |
2025-08-29 19:01:51.620705 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-08-29 19:01:51.620716 | orchestrator | Friday 29 August 2025 18:59:59 +0000 (0:00:01.457) 0:00:01.744 *********
2025-08-29 19:01:51.620727 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-08-29 19:01:51.620761 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-08-29 19:01:51.620773 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-08-29 19:01:51.620783 | orchestrator |
2025-08-29 19:01:51.620794 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-08-29 19:01:51.620805 | orchestrator | Friday 29 August 2025 19:00:00 +0000 (0:00:01.170) 0:00:02.914 *********
2025-08-29 19:01:51.620815 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-08-29 19:01:51.620826 | orchestrator |
2025-08-29 19:01:51.620837 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-08-29 19:01:51.620847 | orchestrator | Friday 29 August 2025 19:00:01 +0000 (0:00:01.096) 0:00:04.011 *********
2025-08-29 19:01:51.620858 | orchestrator | ok: [testbed-manager]
2025-08-29 19:01:51.620868 | orchestrator |
2025-08-29 19:01:51.620879 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-08-29 19:01:51.620890 | orchestrator | Friday 29 August 2025 19:00:02 +0000 (0:00:00.364) 0:00:04.375 *********
2025-08-29 19:01:51.620900 | orchestrator | changed: [testbed-manager]
2025-08-29 19:01:51.620911 | orchestrator |
2025-08-29 19:01:51.620922 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-08-29 19:01:51.620933 | orchestrator | Friday 29 August 2025 19:00:03 +0000 (0:00:00.959) 0:00:05.335 *********
2025-08-29 19:01:51.620943 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-08-29 19:01:51.620954 | orchestrator | ok: [testbed-manager]
2025-08-29 19:01:51.620965 | orchestrator |
2025-08-29 19:01:51.620975 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-08-29 19:01:51.620986 | orchestrator | Friday 29 August 2025 19:00:38 +0000 (0:00:35.355) 0:00:40.690 *********
2025-08-29 19:01:51.620997 | orchestrator | changed: [testbed-manager]
2025-08-29 19:01:51.621007 | orchestrator |
2025-08-29 19:01:51.621018 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-08-29 19:01:51.621029 | orchestrator | Friday 29 August 2025 19:00:50 +0000 (0:00:11.991) 0:00:52.682 *********
2025-08-29 19:01:51.621039 | orchestrator | Pausing for 60 seconds
2025-08-29 19:01:51.621050 | orchestrator | changed: [testbed-manager]
2025-08-29 19:01:51.621061 | orchestrator |
2025-08-29 19:01:51.621071 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-08-29 19:01:51.621082 | orchestrator | Friday 29 August 2025 19:01:50 +0000 (0:01:00.088) 0:01:52.770 *********
2025-08-29 19:01:51.621093 | orchestrator | ok: [testbed-manager]
2025-08-29 19:01:51.621129 | orchestrator |
2025-08-29 19:01:51.621140 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-08-29 19:01:51.621151 | orchestrator | Friday 29 August 2025 19:01:50 +0000 (0:00:00.065) 0:01:52.835 *********
2025-08-29 19:01:51.621161 | orchestrator | changed: [testbed-manager]
2025-08-29 19:01:51.621172 | orchestrator |
2025-08-29 19:01:51.621182 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:01:51.621193 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:01:51.621204 | orchestrator |
2025-08-29 19:01:51.621214 | orchestrator |
2025-08-29 19:01:51.621225 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:01:51.621236 | orchestrator | Friday 29 August 2025 19:01:51 +0000 (0:00:00.627) 0:01:53.463 *********
2025-08-29 19:01:51.621247 | orchestrator | ===============================================================================
2025-08-29 19:01:51.621257 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2025-08-29 19:01:51.621267 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.36s
2025-08-29 19:01:51.621278 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.99s
2025-08-29 19:01:51.621288 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.46s
2025-08-29 19:01:51.621308 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2025-08-29 19:01:51.621319 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s
2025-08-29 19:01:51.621330 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s
2025-08-29 19:01:51.621340 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s
2025-08-29 19:01:51.621351 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-08-29 19:01:51.621362 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-08-29 19:01:51.621372 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2025-08-29 19:01:51.959567 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 19:01:51.960318 | orchestrator | ++ semver latest 9.0.0
2025-08-29 19:01:52.022583 | orchestrator | + [[ -1 -lt 0 ]]
2025-08-29 19:01:52.022611 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 19:01:52.023347 |
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-08-29 19:02:04.005092 | orchestrator | 2025-08-29 19:02:03 | INFO  | Task 3fd9f6da-4d13-456a-be05-0bbb404c2581 (operator) was prepared for execution. 2025-08-29 19:02:04.005246 | orchestrator | 2025-08-29 19:02:04 | INFO  | It takes a moment until task 3fd9f6da-4d13-456a-be05-0bbb404c2581 (operator) has been started and output is visible here. 2025-08-29 19:02:19.672674 | orchestrator | 2025-08-29 19:02:19.672792 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-08-29 19:02:19.672809 | orchestrator | 2025-08-29 19:02:19.672821 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 19:02:19.672852 | orchestrator | Friday 29 August 2025 19:02:07 +0000 (0:00:00.148) 0:00:00.148 ********* 2025-08-29 19:02:19.672864 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:02:19.672877 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:02:19.672889 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:02:19.672907 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:02:19.672926 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:02:19.672943 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:02:19.672961 | orchestrator | 2025-08-29 19:02:19.672978 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-08-29 19:02:19.672995 | orchestrator | Friday 29 August 2025 19:02:11 +0000 (0:00:03.463) 0:00:03.611 ********* 2025-08-29 19:02:19.673014 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:02:19.673031 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:02:19.673050 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:02:19.673069 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:02:19.673087 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:02:19.673165 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:02:19.673178 | orchestrator | 2025-08-29 
19:02:19.673190 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-08-29 19:02:19.673201 | orchestrator | 2025-08-29 19:02:19.673214 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 19:02:19.673226 | orchestrator | Friday 29 August 2025 19:02:12 +0000 (0:00:00.799) 0:00:04.411 ********* 2025-08-29 19:02:19.673239 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:02:19.673252 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:02:19.673264 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:02:19.673277 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:02:19.673289 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:02:19.673302 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:02:19.673315 | orchestrator | 2025-08-29 19:02:19.673327 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 19:02:19.673340 | orchestrator | Friday 29 August 2025 19:02:12 +0000 (0:00:00.176) 0:00:04.588 ********* 2025-08-29 19:02:19.673352 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:02:19.673364 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:02:19.673376 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:02:19.673387 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:02:19.673399 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:02:19.673435 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:02:19.673447 | orchestrator | 2025-08-29 19:02:19.673460 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 19:02:19.673472 | orchestrator | Friday 29 August 2025 19:02:12 +0000 (0:00:00.171) 0:00:04.759 ********* 2025-08-29 19:02:19.673485 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:02:19.673499 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:02:19.673511 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:02:19.673523 | 
orchestrator | changed: [testbed-node-3] 2025-08-29 19:02:19.673535 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:02:19.673548 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:02:19.673561 | orchestrator | 2025-08-29 19:02:19.673572 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 19:02:19.673609 | orchestrator | Friday 29 August 2025 19:02:13 +0000 (0:00:00.596) 0:00:05.356 ********* 2025-08-29 19:02:19.673635 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:02:19.673646 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:02:19.673656 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:02:19.673667 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:02:19.673678 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:02:19.673688 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:02:19.673699 | orchestrator | 2025-08-29 19:02:19.673709 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 19:02:19.673720 | orchestrator | Friday 29 August 2025 19:02:13 +0000 (0:00:00.814) 0:00:06.170 ********* 2025-08-29 19:02:19.673731 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 19:02:19.673742 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 19:02:19.673753 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 19:02:19.673764 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 19:02:19.673774 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 19:02:19.673785 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 19:02:19.673795 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-08-29 19:02:19.673806 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 19:02:19.673817 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-08-29 19:02:19.673827 | orchestrator | changed: 
[testbed-node-3] => (item=sudo) 2025-08-29 19:02:19.673838 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-08-29 19:02:19.673848 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 19:02:19.673859 | orchestrator | 2025-08-29 19:02:19.673870 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 19:02:19.673880 | orchestrator | Friday 29 August 2025 19:02:15 +0000 (0:00:01.176) 0:00:07.347 ********* 2025-08-29 19:02:19.673891 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:02:19.673902 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:02:19.673912 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:02:19.673923 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:02:19.673933 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:02:19.673944 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:02:19.673955 | orchestrator | 2025-08-29 19:02:19.673965 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 19:02:19.673977 | orchestrator | Friday 29 August 2025 19:02:16 +0000 (0:00:01.189) 0:00:08.537 ********* 2025-08-29 19:02:19.673988 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 19:02:19.673998 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-08-29 19:02:19.674009 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-08-29 19:02:19.674080 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 19:02:19.674130 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 19:02:19.674153 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 19:02:19.674174 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 19:02:19.674185 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 19:02:19.674195 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 19:02:19.674206 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-08-29 19:02:19.674216 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-08-29 19:02:19.674227 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-08-29 19:02:19.674237 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-08-29 19:02:19.674248 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-08-29 19:02:19.674258 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-08-29 19:02:19.674269 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-08-29 19:02:19.674279 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-08-29 19:02:19.674290 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-08-29 19:02:19.674301 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-08-29 19:02:19.674311 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-08-29 19:02:19.674322 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-08-29 19:02:19.674332 | 
orchestrator |
2025-08-29 19:02:19.674343 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-08-29 19:02:19.674355 | orchestrator | Friday 29 August 2025 19:02:17 +0000 (0:00:01.228) 0:00:09.765 *********
2025-08-29 19:02:19.674365 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:02:19.674376 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:02:19.674386 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:02:19.674397 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:02:19.674407 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:02:19.674418 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:02:19.674428 | orchestrator |
2025-08-29 19:02:19.674439 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-08-29 19:02:19.674450 | orchestrator | Friday 29 August 2025 19:02:17 +0000 (0:00:00.174) 0:00:09.940 *********
2025-08-29 19:02:19.674461 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:02:19.674471 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:02:19.674482 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:02:19.674492 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:02:19.674503 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:02:19.674513 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:02:19.674523 | orchestrator |
2025-08-29 19:02:19.674534 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-08-29 19:02:19.674545 | orchestrator | Friday 29 August 2025 19:02:18 +0000 (0:00:00.559) 0:00:10.499 *********
2025-08-29 19:02:19.674555 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:02:19.674566 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:02:19.674576 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:02:19.674587 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:02:19.674597 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:02:19.674608 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:02:19.674618 | orchestrator |
2025-08-29 19:02:19.674629 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-08-29 19:02:19.674640 | orchestrator | Friday 29 August 2025 19:02:18 +0000 (0:00:00.201) 0:00:10.701 *********
2025-08-29 19:02:19.674650 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-08-29 19:02:19.674661 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:02:19.674672 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 19:02:19.674683 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 19:02:19.674693 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:02:19.674715 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:02:19.674726 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 19:02:19.674737 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:02:19.674747 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 19:02:19.674758 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:02:19.674769 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-08-29 19:02:19.674779 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:02:19.674790 | orchestrator |
2025-08-29 19:02:19.674801 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-08-29 19:02:19.674812 | orchestrator | Friday 29 August 2025 19:02:19 +0000 (0:00:00.683) 0:00:11.384 *********
2025-08-29 19:02:19.674822 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:02:19.674833 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:02:19.674843 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:02:19.674854 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:02:19.674865 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:02:19.674875 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:02:19.674886 | orchestrator |
2025-08-29 19:02:19.674905 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-08-29 19:02:19.674916 | orchestrator | Friday 29 August 2025 19:02:19 +0000 (0:00:00.165) 0:00:11.550 *********
2025-08-29 19:02:19.674927 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:02:19.674937 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:02:19.674948 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:02:19.674958 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:02:19.674969 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:02:19.674980 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:02:19.674990 | orchestrator |
2025-08-29 19:02:19.675001 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-08-29 19:02:19.675012 | orchestrator | Friday 29 August 2025 19:02:19 +0000 (0:00:00.195) 0:00:11.745 *********
2025-08-29 19:02:19.675022 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:02:19.675033 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:02:19.675044 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:02:19.675054 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:02:19.675072 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:02:20.751573 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:02:20.751697 | orchestrator |
2025-08-29 19:02:20.751733 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-08-29 19:02:20.751747 | orchestrator | Friday 29 August 2025 19:02:19 +0000 (0:00:00.162) 0:00:11.908 *********
2025-08-29 19:02:20.751758 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:02:20.751769 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:02:20.751780 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:02:20.751791 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:02:20.751802 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:02:20.751813 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:02:20.751823 | orchestrator |
2025-08-29 19:02:20.751834 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-08-29 19:02:20.751845 | orchestrator | Friday 29 August 2025 19:02:20 +0000 (0:00:00.633) 0:00:12.542 *********
2025-08-29 19:02:20.751856 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:02:20.751867 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:02:20.751878 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:02:20.751889 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:02:20.751899 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:02:20.751910 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:02:20.751920 | orchestrator |
2025-08-29 19:02:20.751931 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:02:20.751943 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:02:20.751977 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:02:20.751989 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:02:20.752000 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:02:20.752011 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:02:20.752021 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:02:20.752032 | orchestrator |
2025-08-29 19:02:20.752043 | orchestrator |
2025-08-29 19:02:20.752054 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:02:20.752065 | orchestrator | Friday 29 August 2025 19:02:20 +0000 (0:00:00.213) 0:00:12.756 *********
2025-08-29 19:02:20.752076 | orchestrator | ===============================================================================
2025-08-29 19:02:20.752087 | orchestrator | Gathering Facts --------------------------------------------------------- 3.46s
2025-08-29 19:02:20.752098 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s
2025-08-29 19:02:20.752141 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s
2025-08-29 19:02:20.752153 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2025-08-29 19:02:20.752166 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-08-29 19:02:20.752178 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2025-08-29 19:02:20.752190 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s
2025-08-29 19:02:20.752202 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-08-29 19:02:20.752214 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2025-08-29 19:02:20.752226 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-08-29 19:02:20.752238 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-08-29 19:02:20.752250 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-08-29 19:02:20.752262 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s
2025-08-29 19:02:20.752273 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-08-29 19:02:20.752286 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2025-08-29 19:02:20.752298 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2025-08-29 19:02:20.752310 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-08-29 19:02:20.752323 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-08-29 19:02:21.057084 | orchestrator | + osism apply --environment custom facts
2025-08-29 19:02:22.878843 | orchestrator | 2025-08-29 19:02:22 | INFO  | Trying to run play facts in environment custom
2025-08-29 19:02:32.998432 | orchestrator | 2025-08-29 19:02:32 | INFO  | Task 17f1d00f-d8b1-4733-80ca-270be8836b5d (facts) was prepared for execution.
2025-08-29 19:02:32.998549 | orchestrator | 2025-08-29 19:02:32 | INFO  | It takes a moment until task 17f1d00f-d8b1-4733-80ca-270be8836b5d (facts) has been started and output is visible here.
2025-08-29 19:03:13.808725 | orchestrator |
2025-08-29 19:03:13.808872 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-08-29 19:03:13.808899 | orchestrator |
2025-08-29 19:03:13.808948 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 19:03:13.808968 | orchestrator | Friday 29 August 2025 19:02:36 +0000 (0:00:00.102) 0:00:00.102 *********
2025-08-29 19:03:13.808988 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:13.809007 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:03:13.809027 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:03:13.809045 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:03:13.809064 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:03:13.809082 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:03:13.809100 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:03:13.809164 | orchestrator |
2025-08-29 19:03:13.809183 | orchestrator | TASK [Copy fact file] **********************************************************
2025-08-29 19:03:13.809202 | orchestrator | Friday 29 August 2025 19:02:38 +0000 (0:00:01.388) 0:00:01.490 *********
2025-08-29 19:03:13.809220 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:13.809239 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:03:13.809260 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:03:13.809281 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:03:13.809301 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:03:13.809321 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:03:13.809341 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:03:13.809361 | orchestrator |
2025-08-29 19:03:13.809381 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-08-29 19:03:13.809401 | orchestrator |
2025-08-29 19:03:13.809422 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-08-29 19:03:13.809443 | orchestrator | Friday 29 August 2025 19:02:39 +0000 (0:00:01.064) 0:00:02.554 *********
2025-08-29 19:03:13.809464 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.809485 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.809505 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.809525 | orchestrator |
2025-08-29 19:03:13.809546 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-08-29 19:03:13.809567 | orchestrator | Friday 29 August 2025 19:02:39 +0000 (0:00:00.115) 0:00:02.669 *********
2025-08-29 19:03:13.809588 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.809607 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.809626 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.809644 | orchestrator |
2025-08-29 19:03:13.809662 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-08-29 19:03:13.809680 | orchestrator | Friday 29 August 2025 19:02:39 +0000 (0:00:00.180) 0:00:02.850 *********
2025-08-29 19:03:13.809699 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.809717 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.809735 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.809753 | orchestrator |
2025-08-29 19:03:13.809772 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-08-29 19:03:13.809791 | orchestrator | Friday 29 August 2025 19:02:39 +0000 (0:00:00.177) 0:00:03.028 *********
2025-08-29 19:03:13.809811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:03:13.809831 | orchestrator |
2025-08-29 19:03:13.809850 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-08-29 19:03:13.809868 | orchestrator | Friday 29 August 2025 19:02:39 +0000 (0:00:00.142) 0:00:03.170 *********
2025-08-29 19:03:13.809887 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.809905 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.809923 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.809941 | orchestrator |
2025-08-29 19:03:13.809960 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-08-29 19:03:13.809979 | orchestrator | Friday 29 August 2025 19:02:40 +0000 (0:00:00.417) 0:00:03.588 *********
2025-08-29 19:03:13.809997 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:03:13.810088 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:03:13.810150 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:03:13.810172 | orchestrator |
2025-08-29 19:03:13.810231 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-08-29 19:03:13.810252 | orchestrator | Friday 29 August 2025 19:02:40 +0000 (0:00:00.110) 0:00:03.698 *********
2025-08-29 19:03:13.810272 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:03:13.810293 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:03:13.810312 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:03:13.810332 | orchestrator |
2025-08-29 19:03:13.810350 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-08-29 19:03:13.810370 | orchestrator | Friday 29 August 2025 19:02:41 +0000 (0:00:00.995) 0:00:04.694 *********
2025-08-29 19:03:13.810389 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.810409 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.810427 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.810446 | orchestrator |
2025-08-29 19:03:13.810464 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-08-29 19:03:13.810484 | orchestrator | Friday 29 August 2025 19:02:41 +0000 (0:00:00.464) 0:00:05.158 *********
2025-08-29 19:03:13.810503 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:03:13.810522 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:03:13.810543 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:03:13.810562 | orchestrator |
2025-08-29 19:03:13.810603 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-08-29 19:03:13.810623 | orchestrator | Friday 29 August 2025 19:02:43 +0000 (0:00:01.024) 0:00:06.183 *********
2025-08-29 19:03:13.810642 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:03:13.810661 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:03:13.810680 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:03:13.810700 | orchestrator |
2025-08-29 19:03:13.810719 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-08-29 19:03:13.810739 | orchestrator | Friday 29 August 2025 19:02:58 +0000 (0:00:15.885) 0:00:22.068 *********
2025-08-29 19:03:13.810758 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:03:13.810778 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:03:13.810797 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:03:13.810817 | orchestrator |
2025-08-29 19:03:13.810835 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-08-29 19:03:13.810878 | orchestrator | Friday 29 August 2025 19:02:58 +0000 (0:00:00.103) 0:00:22.171 *********
2025-08-29 19:03:13.810899 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:03:13.810918 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:03:13.810944 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:03:13.810962 | orchestrator |
2025-08-29 19:03:13.810981 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 19:03:13.811000 | orchestrator | Friday 29 August 2025 19:03:05 +0000 (0:00:06.290) 0:00:28.462 *********
2025-08-29 19:03:13.811018 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.811036 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.811054 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.811072 | orchestrator |
2025-08-29 19:03:13.811092 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 19:03:13.811135 | orchestrator | Friday 29 August 2025 19:03:05 +0000 (0:00:00.446) 0:00:28.909 *********
2025-08-29 19:03:13.811155 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-08-29 19:03:13.811173 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-08-29 19:03:13.811192 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-08-29 19:03:13.811211 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-08-29 19:03:13.811229 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-08-29 19:03:13.811249 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-08-29 19:03:13.811267 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-08-29 19:03:13.811298 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-08-29 19:03:13.811315 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-08-29 19:03:13.811331 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-08-29 19:03:13.811348 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-08-29 19:03:13.811364 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-08-29 19:03:13.811380 | orchestrator |
2025-08-29 19:03:13.811397 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 19:03:13.811413 | orchestrator | Friday 29 August 2025 19:03:09 +0000 (0:00:03.272) 0:00:32.182 *********
2025-08-29 19:03:13.811430 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.811446 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.811462 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.811479 | orchestrator |
2025-08-29 19:03:13.811495 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 19:03:13.811510 | orchestrator |
2025-08-29 19:03:13.811526 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 19:03:13.811541 | orchestrator | Friday 29 August 2025 19:03:10 +0000 (0:00:01.100) 0:00:33.282 *********
2025-08-29 19:03:13.811557 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:03:13.811572 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:03:13.811588 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:03:13.811604 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:13.811619 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:13.811634 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:13.811650 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:13.811665 | orchestrator |
2025-08-29 19:03:13.811681 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:03:13.811698 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:03:13.811714 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:03:13.811731 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:03:13.811748 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:03:13.811765 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:03:13.811782 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:03:13.811798 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:03:13.811815 | orchestrator |
2025-08-29 19:03:13.811832 | orchestrator |
2025-08-29 19:03:13.811848 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:03:13.811864 | orchestrator | Friday 29 August 2025 19:03:13 +0000 (0:00:03.687) 0:00:36.969 *********
2025-08-29 19:03:13.811881 | orchestrator | ===============================================================================
2025-08-29 19:03:13.811898 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.89s
2025-08-29 19:03:13.811914 | orchestrator | Install required packages (Debian) -------------------------------------- 6.29s
2025-08-29 19:03:13.811931 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.69s
2025-08-29 19:03:13.811947 | orchestrator | Copy fact files --------------------------------------------------------- 3.27s
2025-08-29 19:03:13.811963 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2025-08-29 19:03:13.811989 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.10s
2025-08-29 19:03:13.812016 | orchestrator | Copy fact file ---------------------------------------------------------- 1.06s
2025-08-29 19:03:14.073659 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-08-29 19:03:14.073808 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2025-08-29 19:03:14.073829 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-08-29 19:03:14.073841 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2025-08-29 19:03:14.073852 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2025-08-29 19:03:14.073863 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2025-08-29 19:03:14.073875 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2025-08-29 19:03:14.073886 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-08-29 19:03:14.073897 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-08-29 19:03:14.073908 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-08-29 19:03:14.073919 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-08-29 19:03:14.402430 | orchestrator | + osism apply bootstrap
2025-08-29 19:03:26.444463 | orchestrator | 2025-08-29 19:03:26 | INFO  | Task 7e341aef-db30-48b9-ad68-300eaf46cdb2 (bootstrap) was prepared for execution.
2025-08-29 19:03:26.444579 | orchestrator | 2025-08-29 19:03:26 | INFO  | It takes a moment until task 7e341aef-db30-48b9-ad68-300eaf46cdb2 (bootstrap) has been started and output is visible here.
2025-08-29 19:03:42.034940 | orchestrator |
2025-08-29 19:03:42.035052 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-08-29 19:03:42.035070 | orchestrator |
2025-08-29 19:03:42.035082 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-08-29 19:03:42.035094 | orchestrator | Friday 29 August 2025 19:03:30 +0000 (0:00:00.166) 0:00:00.167 *********
2025-08-29 19:03:42.035106 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:42.035150 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:03:42.035162 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:03:42.035173 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:03:42.035184 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:42.035195 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:42.035205 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:42.035216 | orchestrator |
2025-08-29 19:03:42.035227 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 19:03:42.035238 | orchestrator |
2025-08-29 19:03:42.035249 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 19:03:42.035260 | orchestrator | Friday 29 August 2025 19:03:30 +0000 (0:00:00.206) 0:00:00.373 *********
2025-08-29 19:03:42.035271 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:03:42.035282 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:03:42.035293 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:03:42.035304 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:42.035314 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:42.035325 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:42.035336 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:42.035347 | orchestrator |
2025-08-29 19:03:42.035358 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-08-29 19:03:42.035369 | orchestrator |
2025-08-29 19:03:42.035380 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 19:03:42.035390 | orchestrator | Friday 29 August 2025 19:03:34 +0000 (0:00:03.620) 0:00:03.994 *********
2025-08-29 19:03:42.035402 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 19:03:42.035413 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 19:03:42.035447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-08-29 19:03:42.035458 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 19:03:42.035469 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 19:03:42.035481 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 19:03:42.035495 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 19:03:42.035507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 19:03:42.035520 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-08-29 19:03:42.035533 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-08-29 19:03:42.035546 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 19:03:42.035558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 19:03:42.035571 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 19:03:42.035583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 19:03:42.035596 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 19:03:42.035610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-08-29 19:03:42.035622 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 19:03:42.035635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 19:03:42.035647 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 19:03:42.035660 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:03:42.035673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 19:03:42.035687 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 19:03:42.035700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 19:03:42.035712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 19:03:42.035724 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-08-29 19:03:42.035738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-08-29 19:03:42.035750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 19:03:42.035763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 19:03:42.035774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 19:03:42.035785 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:03:42.035795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:03:42.035806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 19:03:42.035816 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-08-29 19:03:42.035844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-08-29 19:03:42.035856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:03:42.035866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 19:03:42.035877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-08-29 19:03:42.035888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:03:42.035898 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:03:42.035909 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-08-29 19:03:42.035920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-08-29 19:03:42.035930 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:03:42.035941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 19:03:42.035952 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-08-29 19:03:42.035962 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:03:42.035973 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 19:03:42.035984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-08-29 19:03:42.036021 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 19:03:42.036034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-08-29 19:03:42.036045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 19:03:42.036056 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-08-29 19:03:42.036066 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:03:42.036077 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-08-29 19:03:42.036088 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-08-29 19:03:42.036098 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-08-29 19:03:42.036109 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:03:42.036150 | orchestrator |
2025-08-29 19:03:42.036162 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-08-29 19:03:42.036173 | orchestrator |
2025-08-29 19:03:42.036183 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-08-29 19:03:42.036194 | orchestrator | Friday 29 August 2025 19:03:34 +0000 (0:00:00.466) 0:00:04.461 *********
2025-08-29 19:03:42.036205 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:03:42.036216 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:42.036227 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:42.036237 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:03:42.036248 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:03:42.036258 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:42.036269 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:42.036279 | orchestrator |
2025-08-29 19:03:42.036290 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-08-29 19:03:42.036301 | orchestrator | Friday 29 August 2025 19:03:36 +0000 (0:00:01.268) 0:00:05.729 *********
2025-08-29 19:03:42.036312 | orchestrator | ok: [testbed-manager]
2025-08-29 19:03:42.036323 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:03:42.036333 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:03:42.036344 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:03:42.036355 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:03:42.036365 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:03:42.036375 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:03:42.036386 | orchestrator |
2025-08-29 19:03:42.036397 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-08-29 19:03:42.036408 | orchestrator | Friday 29 August 2025 19:03:37 +0000 (0:00:01.193) 0:00:06.923 *********
2025-08-29 19:03:42.036419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:03:42.036432 | orchestrator |
2025-08-29 19:03:42.036444 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-08-29 19:03:42.036454 | orchestrator | Friday 29
August 2025 19:03:37 +0000 (0:00:00.259) 0:00:07.182 ********* 2025-08-29 19:03:42.036465 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:42.036476 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:42.036487 | orchestrator | changed: [testbed-manager] 2025-08-29 19:03:42.036498 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:03:42.036508 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:42.036519 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:03:42.036530 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:03:42.036541 | orchestrator | 2025-08-29 19:03:42.036552 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-08-29 19:03:42.036562 | orchestrator | Friday 29 August 2025 19:03:39 +0000 (0:00:01.949) 0:00:09.132 ********* 2025-08-29 19:03:42.036573 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:03:42.036585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:03:42.036605 | orchestrator | 2025-08-29 19:03:42.036616 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-08-29 19:03:42.036627 | orchestrator | Friday 29 August 2025 19:03:39 +0000 (0:00:00.288) 0:00:09.421 ********* 2025-08-29 19:03:42.036637 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:42.036648 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:42.036659 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:03:42.036669 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:03:42.036685 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:42.036696 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:03:42.036706 | orchestrator | 2025-08-29 19:03:42.036717 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-08-29 19:03:42.036728 | orchestrator | Friday 29 August 2025 19:03:40 +0000 (0:00:00.946) 0:00:10.368 ********* 2025-08-29 19:03:42.036739 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:03:42.036749 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:42.036760 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:03:42.036770 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:42.036781 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:03:42.036791 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:03:42.036802 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:42.036813 | orchestrator | 2025-08-29 19:03:42.036824 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-08-29 19:03:42.036835 | orchestrator | Friday 29 August 2025 19:03:41 +0000 (0:00:00.640) 0:00:11.008 ********* 2025-08-29 19:03:42.036845 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:03:42.036856 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:03:42.036866 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:03:42.036877 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:03:42.036887 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:03:42.036898 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:03:42.036908 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:42.036919 | orchestrator | 2025-08-29 19:03:42.036930 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 19:03:42.036941 | orchestrator | Friday 29 August 2025 19:03:41 +0000 (0:00:00.512) 0:00:11.521 ********* 2025-08-29 19:03:42.036952 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:03:42.036963 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:03:42.036980 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:03:54.419619 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 19:03:54.419737 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:03:54.419752 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:03:54.419763 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:03:54.419775 | orchestrator | 2025-08-29 19:03:54.419788 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 19:03:54.419800 | orchestrator | Friday 29 August 2025 19:03:42 +0000 (0:00:00.287) 0:00:11.809 ********* 2025-08-29 19:03:54.419813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:03:54.419843 | orchestrator | 2025-08-29 19:03:54.419854 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 19:03:54.419866 | orchestrator | Friday 29 August 2025 19:03:42 +0000 (0:00:00.314) 0:00:12.124 ********* 2025-08-29 19:03:54.419878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:03:54.419889 | orchestrator | 2025-08-29 19:03:54.419900 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 19:03:54.419911 | orchestrator | Friday 29 August 2025 19:03:42 +0000 (0:00:00.309) 0:00:12.433 ********* 2025-08-29 19:03:54.419944 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.419956 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.419966 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.419977 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.419988 | orchestrator | ok: [testbed-node-2] 2025-08-29 
19:03:54.419998 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.420009 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.420020 | orchestrator | 2025-08-29 19:03:54.420031 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 19:03:54.420041 | orchestrator | Friday 29 August 2025 19:03:44 +0000 (0:00:01.352) 0:00:13.786 ********* 2025-08-29 19:03:54.420052 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:03:54.420063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:03:54.420074 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:03:54.420084 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:03:54.420095 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:03:54.420106 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:03:54.420116 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:03:54.420158 | orchestrator | 2025-08-29 19:03:54.420170 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 19:03:54.420183 | orchestrator | Friday 29 August 2025 19:03:44 +0000 (0:00:00.275) 0:00:14.061 ********* 2025-08-29 19:03:54.420195 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.420207 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.420219 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.420231 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.420243 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.420255 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.420267 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.420279 | orchestrator | 2025-08-29 19:03:54.420292 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 19:03:54.420305 | orchestrator | Friday 29 August 2025 19:03:44 +0000 (0:00:00.572) 0:00:14.634 ********* 2025-08-29 19:03:54.420317 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 19:03:54.420329 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:03:54.420341 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:03:54.420353 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:03:54.420365 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:03:54.420377 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:03:54.420389 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:03:54.420402 | orchestrator | 2025-08-29 19:03:54.420415 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 19:03:54.420429 | orchestrator | Friday 29 August 2025 19:03:45 +0000 (0:00:00.301) 0:00:14.936 ********* 2025-08-29 19:03:54.420441 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.420454 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:54.420466 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:54.420479 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:54.420491 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:03:54.420504 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:03:54.420515 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:03:54.420526 | orchestrator | 2025-08-29 19:03:54.420537 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 19:03:54.420548 | orchestrator | Friday 29 August 2025 19:03:45 +0000 (0:00:00.671) 0:00:15.607 ********* 2025-08-29 19:03:54.420559 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.420569 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:54.420580 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:54.420591 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:54.420601 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:03:54.420612 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:03:54.420623 | orchestrator | changed: 
[testbed-node-5] 2025-08-29 19:03:54.420633 | orchestrator | 2025-08-29 19:03:54.420644 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 19:03:54.420662 | orchestrator | Friday 29 August 2025 19:03:47 +0000 (0:00:01.137) 0:00:16.745 ********* 2025-08-29 19:03:54.420673 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.420684 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.420695 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.420705 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.420716 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.420726 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.420737 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.420748 | orchestrator | 2025-08-29 19:03:54.420759 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 19:03:54.420770 | orchestrator | Friday 29 August 2025 19:03:48 +0000 (0:00:01.075) 0:00:17.820 ********* 2025-08-29 19:03:54.420799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:03:54.420811 | orchestrator | 2025-08-29 19:03:54.420822 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 19:03:54.420833 | orchestrator | Friday 29 August 2025 19:03:48 +0000 (0:00:00.358) 0:00:18.178 ********* 2025-08-29 19:03:54.420844 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:03:54.420855 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:54.420865 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:54.420876 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:03:54.420887 | orchestrator | changed: [testbed-node-3] 2025-08-29 
19:03:54.420898 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:03:54.420908 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:54.420919 | orchestrator | 2025-08-29 19:03:54.420930 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 19:03:54.420940 | orchestrator | Friday 29 August 2025 19:03:49 +0000 (0:00:01.344) 0:00:19.523 ********* 2025-08-29 19:03:54.420951 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.420962 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.420973 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.420983 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.420994 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421004 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.421015 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.421026 | orchestrator | 2025-08-29 19:03:54.421037 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 19:03:54.421048 | orchestrator | Friday 29 August 2025 19:03:50 +0000 (0:00:00.257) 0:00:19.780 ********* 2025-08-29 19:03:54.421058 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.421069 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.421079 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.421090 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.421101 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421111 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.421178 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.421191 | orchestrator | 2025-08-29 19:03:54.421202 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 19:03:54.421213 | orchestrator | Friday 29 August 2025 19:03:50 +0000 (0:00:00.225) 0:00:20.006 ********* 2025-08-29 19:03:54.421224 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.421234 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.421245 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.421256 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.421266 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421277 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.421287 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.421298 | orchestrator | 2025-08-29 19:03:54.421309 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 19:03:54.421320 | orchestrator | Friday 29 August 2025 19:03:50 +0000 (0:00:00.265) 0:00:20.271 ********* 2025-08-29 19:03:54.421339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:03:54.421352 | orchestrator | 2025-08-29 19:03:54.421363 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 19:03:54.421374 | orchestrator | Friday 29 August 2025 19:03:50 +0000 (0:00:00.331) 0:00:20.602 ********* 2025-08-29 19:03:54.421384 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.421395 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.421406 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.421416 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.421427 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421438 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.421448 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.421459 | orchestrator | 2025-08-29 19:03:54.421469 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 19:03:54.421480 | orchestrator | Friday 29 August 2025 19:03:51 +0000 (0:00:00.533) 0:00:21.135 ********* 2025-08-29 19:03:54.421491 | orchestrator | 
skipping: [testbed-manager] 2025-08-29 19:03:54.421502 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:03:54.421512 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:03:54.421523 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:03:54.421538 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:03:54.421549 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:03:54.421559 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:03:54.421570 | orchestrator | 2025-08-29 19:03:54.421580 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 19:03:54.421591 | orchestrator | Friday 29 August 2025 19:03:51 +0000 (0:00:00.265) 0:00:21.401 ********* 2025-08-29 19:03:54.421602 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.421613 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:54.421623 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:54.421634 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421645 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:03:54.421655 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.421666 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.421677 | orchestrator | 2025-08-29 19:03:54.421687 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 19:03:54.421698 | orchestrator | Friday 29 August 2025 19:03:52 +0000 (0:00:01.035) 0:00:22.437 ********* 2025-08-29 19:03:54.421709 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.421720 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:03:54.421730 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:03:54.421741 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:03:54.421752 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421762 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:03:54.421773 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:03:54.421784 | orchestrator | 
2025-08-29 19:03:54.421795 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 19:03:54.421806 | orchestrator | Friday 29 August 2025 19:03:53 +0000 (0:00:00.585) 0:00:23.023 ********* 2025-08-29 19:03:54.421816 | orchestrator | ok: [testbed-manager] 2025-08-29 19:03:54.421827 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:03:54.421838 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:03:54.421849 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:03:54.421867 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.444474 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.444570 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:04:35.444586 | orchestrator | 2025-08-29 19:04:35.444599 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 19:04:35.444610 | orchestrator | Friday 29 August 2025 19:03:54 +0000 (0:00:01.052) 0:00:24.075 ********* 2025-08-29 19:04:35.444622 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.444632 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.444663 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.444674 | orchestrator | changed: [testbed-manager] 2025-08-29 19:04:35.444685 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:04:35.444696 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:04:35.444706 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:04:35.444717 | orchestrator | 2025-08-29 19:04:35.444728 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-08-29 19:04:35.444739 | orchestrator | Friday 29 August 2025 19:04:10 +0000 (0:00:16.269) 0:00:40.345 ********* 2025-08-29 19:04:35.444749 | orchestrator | ok: [testbed-manager] 2025-08-29 19:04:35.444760 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:04:35.444770 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:04:35.444781 | orchestrator 
| ok: [testbed-node-2] 2025-08-29 19:04:35.444791 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.444802 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.444812 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.444822 | orchestrator | 2025-08-29 19:04:35.444833 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-08-29 19:04:35.444844 | orchestrator | Friday 29 August 2025 19:04:10 +0000 (0:00:00.229) 0:00:40.575 ********* 2025-08-29 19:04:35.444855 | orchestrator | ok: [testbed-manager] 2025-08-29 19:04:35.444865 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:04:35.444876 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:04:35.444886 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:04:35.444897 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.444907 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.444918 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.444928 | orchestrator | 2025-08-29 19:04:35.444939 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-08-29 19:04:35.444950 | orchestrator | Friday 29 August 2025 19:04:11 +0000 (0:00:00.260) 0:00:40.836 ********* 2025-08-29 19:04:35.444960 | orchestrator | ok: [testbed-manager] 2025-08-29 19:04:35.444971 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:04:35.444981 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:04:35.444992 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:04:35.445002 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.445013 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.445027 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.445039 | orchestrator | 2025-08-29 19:04:35.445051 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-08-29 19:04:35.445064 | orchestrator | Friday 29 August 2025 19:04:11 +0000 (0:00:00.227) 0:00:41.063 ********* 2025-08-29 
19:04:35.445077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:04:35.445091 | orchestrator | 2025-08-29 19:04:35.445103 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-08-29 19:04:35.445115 | orchestrator | Friday 29 August 2025 19:04:11 +0000 (0:00:00.322) 0:00:41.385 ********* 2025-08-29 19:04:35.445147 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:04:35.445160 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:04:35.445172 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:04:35.445184 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.445196 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.445207 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.445220 | orchestrator | ok: [testbed-manager] 2025-08-29 19:04:35.445232 | orchestrator | 2025-08-29 19:04:35.445245 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-08-29 19:04:35.445257 | orchestrator | Friday 29 August 2025 19:04:13 +0000 (0:00:01.657) 0:00:43.043 ********* 2025-08-29 19:04:35.445269 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:04:35.445281 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:04:35.445294 | orchestrator | changed: [testbed-manager] 2025-08-29 19:04:35.445306 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:04:35.445325 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:04:35.445338 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:04:35.445350 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:04:35.445362 | orchestrator | 2025-08-29 19:04:35.445387 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-08-29 19:04:35.445398 | 
orchestrator | Friday 29 August 2025 19:04:14 +0000 (0:00:01.182) 0:00:44.225 ********* 2025-08-29 19:04:35.445408 | orchestrator | ok: [testbed-manager] 2025-08-29 19:04:35.445419 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:04:35.445429 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:04:35.445440 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.445451 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:04:35.445462 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.445472 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:04:35.445483 | orchestrator | 2025-08-29 19:04:35.445493 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-08-29 19:04:35.445504 | orchestrator | Friday 29 August 2025 19:04:15 +0000 (0:00:00.822) 0:00:45.048 ********* 2025-08-29 19:04:35.445515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:04:35.445527 | orchestrator | 2025-08-29 19:04:35.445537 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-08-29 19:04:35.445548 | orchestrator | Friday 29 August 2025 19:04:15 +0000 (0:00:00.300) 0:00:45.348 ********* 2025-08-29 19:04:35.445559 | orchestrator | changed: [testbed-manager] 2025-08-29 19:04:35.445569 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:04:35.445580 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:04:35.445591 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:04:35.445601 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:04:35.445612 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:04:35.445622 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:04:35.445633 | orchestrator | 2025-08-29 19:04:35.445658 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-08-29 19:04:35.445670 | orchestrator | Friday 29 August 2025 19:04:16 +0000 (0:00:01.030) 0:00:46.379 ********* 2025-08-29 19:04:35.445681 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:04:35.445691 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:04:35.445702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:04:35.445713 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:04:35.445723 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:04:35.445734 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:04:35.445744 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:04:35.445755 | orchestrator | 2025-08-29 19:04:35.445766 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-08-29 19:04:35.445776 | orchestrator | Friday 29 August 2025 19:04:17 +0000 (0:00:00.359) 0:00:46.738 ********* 2025-08-29 19:04:35.445787 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:04:35.445798 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:04:35.445808 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:04:35.445818 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:04:35.445829 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:04:35.445839 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:04:35.445850 | orchestrator | changed: [testbed-manager] 2025-08-29 19:04:35.445861 | orchestrator | 2025-08-29 19:04:35.445872 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-08-29 19:04:35.445882 | orchestrator | Friday 29 August 2025 19:04:30 +0000 (0:00:13.032) 0:00:59.771 ********* 2025-08-29 19:04:35.445893 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:04:35.445904 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:04:35.445914 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:04:35.445925 | orchestrator | ok: [testbed-node-0] 2025-08-29 
2025-08-29 19:04:35.445935 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:04:35.445952 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:04:35.445963 | orchestrator | ok: [testbed-manager]
2025-08-29 19:04:35.445974 | orchestrator |
2025-08-29 19:04:35.445984 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-08-29 19:04:35.445995 | orchestrator | Friday 29 August 2025  19:04:31 +0000 (0:00:01.550)       0:01:01.322 *********
2025-08-29 19:04:35.446006 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:04:35.446069 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:04:35.446083 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:04:35.446094 | orchestrator | ok: [testbed-manager]
2025-08-29 19:04:35.446105 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:04:35.446116 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:04:35.446126 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:04:35.446159 | orchestrator |
2025-08-29 19:04:35.446170 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-08-29 19:04:35.446181 | orchestrator | Friday 29 August 2025  19:04:32 +0000 (0:00:00.240)       0:01:02.081 *********
2025-08-29 19:04:35.446191 | orchestrator | ok: [testbed-manager]
2025-08-29 19:04:35.446202 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:04:35.446212 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:04:35.446223 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:04:35.446233 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:04:35.446244 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:04:35.446255 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:04:35.446265 | orchestrator |
2025-08-29 19:04:35.446276 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-08-29 19:04:35.446287 | orchestrator | Friday 29 August 2025  19:04:32 +0000 (0:00:00.240)       0:01:02.322 *********
2025-08-29 19:04:35.446297 | orchestrator | ok: [testbed-manager]
2025-08-29 19:04:35.446308 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:04:35.446318 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:04:35.446329 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:04:35.446339 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:04:35.446350 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:04:35.446360 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:04:35.446371 | orchestrator |
2025-08-29 19:04:35.446381 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-08-29 19:04:35.446392 | orchestrator | Friday 29 August 2025  19:04:32 +0000 (0:00:00.240)       0:01:02.562 *********
2025-08-29 19:04:35.446403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:04:35.446414 | orchestrator |
2025-08-29 19:04:35.446425 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-08-29 19:04:35.446436 | orchestrator | Friday 29 August 2025  19:04:33 +0000 (0:00:00.268)       0:01:02.831 *********
2025-08-29 19:04:35.446446 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:04:35.446457 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:04:35.446468 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:04:35.446478 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:04:35.446489 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:04:35.446500 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:04:35.446511 | orchestrator | ok: [testbed-manager]
2025-08-29 19:04:35.446521 | orchestrator |
2025-08-29 19:04:35.446532 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-08-29 19:04:35.446542 | orchestrator | Friday 29 August 2025  19:04:34 +0000 (0:00:01.477)       0:01:04.308 *********
2025-08-29 19:04:35.446553 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:04:35.446564 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:04:35.446574 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:04:35.446585 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:04:35.446596 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:04:35.446606 | orchestrator | changed: [testbed-manager]
2025-08-29 19:04:35.446617 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:04:35.446635 | orchestrator |
2025-08-29 19:04:35.446646 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-08-29 19:04:35.446657 | orchestrator | Friday 29 August 2025  19:04:35 +0000 (0:00:00.522)       0:01:04.831 *********
2025-08-29 19:04:35.446667 | orchestrator | ok: [testbed-manager]
2025-08-29 19:04:35.446678 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:04:35.446689 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:04:35.446699 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:04:35.446710 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:04:35.446720 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:04:35.446731 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:04:35.446741 | orchestrator |
2025-08-29 19:04:35.446760 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-08-29 19:06:53.025801 | orchestrator | Friday 29 August 2025  19:04:35 +0000 (0:00:00.269)       0:01:05.100 *********
2025-08-29 19:06:53.025938 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:06:53.025975 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:06:53.025988 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:06:53.026083 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:06:53.026116 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:06:53.026229 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:06:53.026243 | orchestrator | ok: [testbed-manager]
2025-08-29 19:06:53.026255 | orchestrator |
2025-08-29 19:06:53.026269 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-08-29 19:06:53.026281 | orchestrator | Friday 29 August 2025  19:04:36 +0000 (0:00:01.019)       0:01:06.120 *********
2025-08-29 19:06:53.026293 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:06:53.026306 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:06:53.026317 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:06:53.026327 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:06:53.026339 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:06:53.026351 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:06:53.026362 | orchestrator | changed: [testbed-manager]
2025-08-29 19:06:53.026374 | orchestrator |
2025-08-29 19:06:53.026416 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-08-29 19:06:53.026433 | orchestrator | Friday 29 August 2025  19:04:37 +0000 (0:00:01.363)       0:01:07.483 *********
2025-08-29 19:06:53.026447 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:06:53.026459 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:06:53.026556 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:06:53.026574 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:06:53.026582 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:06:53.026590 | orchestrator | ok: [testbed-manager]
2025-08-29 19:06:53.026598 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:06:53.026607 | orchestrator |
2025-08-29 19:06:53.026615 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-08-29 19:06:53.026623 | orchestrator | Friday 29 August 2025  19:04:39 +0000 (0:00:02.088)       0:01:09.572 *********
2025-08-29 19:06:53.026631 | orchestrator | ok: [testbed-manager]
2025-08-29 19:06:53.026639 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:06:53.026646 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:06:53.026654 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:06:53.026662 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:06:53.026670 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:06:53.026677 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:06:53.026685 | orchestrator |
2025-08-29 19:06:53.026693 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-08-29 19:06:53.026701 | orchestrator | Friday 29 August 2025  19:05:19 +0000 (0:00:40.036)       0:01:49.609 *********
2025-08-29 19:06:53.026708 | orchestrator | changed: [testbed-manager]
2025-08-29 19:06:53.026714 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:06:53.026721 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:06:53.026728 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:06:53.026734 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:06:53.026741 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:06:53.026748 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:06:53.026771 | orchestrator |
2025-08-29 19:06:53.026779 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-08-29 19:06:53.026785 | orchestrator | Friday 29 August 2025  19:06:33 +0000 (0:01:13.944)       0:03:03.554 *********
2025-08-29 19:06:53.026792 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:06:53.026798 | orchestrator | ok: [testbed-manager]
2025-08-29 19:06:53.026805 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:06:53.026811 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:06:53.026819 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:06:53.026825 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:06:53.026832 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:06:53.026838 | orchestrator |
2025-08-29 19:06:53.026845 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-08-29 19:06:53.026853 | orchestrator | Friday 29 August 2025  19:06:35 +0000 (0:00:01.545)       0:03:05.099 *********
2025-08-29 19:06:53.026859 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:06:53.026866 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:06:53.026872 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:06:53.026879 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:06:53.026885 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:06:53.026892 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:06:53.026898 | orchestrator | changed: [testbed-manager]
2025-08-29 19:06:53.026905 | orchestrator |
2025-08-29 19:06:53.026911 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-08-29 19:06:53.026918 | orchestrator | Friday 29 August 2025  19:06:48 +0000 (0:00:12.916)       0:03:18.016 *********
2025-08-29 19:06:53.026935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-08-29 19:06:53.026947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-08-29 19:06:53.026978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-08-29 19:06:53.026991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-08-29 19:06:53.026998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-08-29 19:06:53.027005 | orchestrator |
2025-08-29 19:06:53.027012 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-08-29 19:06:53.027018 | orchestrator | Friday 29 August 2025  19:06:48 +0000 (0:00:00.434)       0:03:18.450 *********
2025-08-29 19:06:53.027031 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027037 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:06:53.027044 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027051 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:06:53.027057 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027138 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:06:53.027218 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027256 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:06:53.027280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:06:53.027421 | orchestrator |
2025-08-29 19:06:53.027433 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-08-29 19:06:53.027445 | orchestrator | Friday 29 August 2025  19:06:49 +0000 (0:00:00.654)       0:03:19.104 *********
2025-08-29 19:06:53.027455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:53.027469 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:53.027480 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:53.027491 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:53.027502 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:53.027512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:53.027523 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:53.027534 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:53.027552 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:53.027564 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:53.027576 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:06:53.027588 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:53.027599 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:53.027611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:53.027622 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:53.027634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:53.027646 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:53.027657 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:53.027669 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:53.027776 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:53.027815 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:53.027838 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:56.246692 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:56.246803 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:06:56.246820 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:56.246832 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:56.246845 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:56.246856 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:56.246867 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:56.246878 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:56.246889 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:56.246900 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:56.246911 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:06:56.246921 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:56.246932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:56.246943 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:56.246954 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:56.246964 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:56.246975 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:56.246986 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:56.246997 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:56.247007 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:56.247018 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:56.247029 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:06:56.247039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:56.247050 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:56.247061 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 19:06:56.247072 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:56.247083 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:56.247094 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 19:06:56.247105 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:56.247115 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:56.247126 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 19:06:56.247137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:56.247148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:56.247221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 19:06:56.247235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:56.247247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:56.247260 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 19:06:56.247272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:56.247284 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:56.247296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 19:06:56.247309 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:56.247321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:56.247333 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 19:06:56.247364 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:56.247377 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:56.247389 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 19:06:56.247401 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:56.247413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:56.247426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 19:06:56.247438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:56.247450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:56.247463 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 19:06:56.247475 | orchestrator |
2025-08-29 19:06:56.247488 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-08-29 19:06:56.247501 | orchestrator | Friday 29 August 2025  19:06:53 +0000 (0:00:03.581)       0:03:22.686 *********
2025-08-29 19:06:56.247514 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247536 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247557 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247568 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247578 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 19:06:56.247589 | orchestrator |
2025-08-29 19:06:56.247617 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-08-29 19:06:56.247628 | orchestrator | Friday 29 August 2025  19:06:53 +0000 (0:00:00.621)       0:03:23.308 *********
2025-08-29 19:06:56.247639 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247650 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:06:56.247661 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247680 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247691 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:06:56.247702 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:06:56.247712 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247723 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:06:56.247734 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247756 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 19:06:56.247767 | orchestrator |
2025-08-29 19:06:56.247778 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-08-29 19:06:56.247788 | orchestrator | Friday 29 August 2025  19:06:55 +0000 (0:00:01.571)       0:03:24.879 *********
2025-08-29 19:06:56.247804 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247815 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247826 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:06:56.247837 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:06:56.247848 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247858 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:06:56.247869 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247880 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:06:56.247891 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247902 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247912 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 19:06:56.247923 | orchestrator |
2025-08-29 19:06:56.247934 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-08-29 19:06:56.247944 | orchestrator | Friday 29 August 2025  19:06:55 +0000 (0:00:00.340)       0:03:25.570 *********
2025-08-29 19:06:56.247955 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:06:56.247966 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:06:56.247977 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:06:56.247988 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:06:56.247998 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:06:56.248016 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:07:07.779556 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:07:07.779671 | orchestrator |
2025-08-29 19:07:07.779686 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-08-29 19:07:07.779699 | orchestrator | Friday 29 August 2025  19:06:56 +0000 (0:00:00.340)       0:03:25.911 *********
2025-08-29 19:07:07.779710 | orchestrator | ok: [testbed-manager]
2025-08-29 19:07:07.779722 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:07:07.779733 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:07:07.779743 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:07:07.779754 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:07:07.779765 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:07:07.779775 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:07:07.779785 | orchestrator |
2025-08-29 19:07:07.779796 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-08-29 19:07:07.779807 | orchestrator | Friday 29 August 2025  19:07:01 +0000 (0:00:05.720)       0:03:31.631 *********
2025-08-29 19:07:07.779816 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-08-29 19:07:07.779828 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:07:07.779863 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-08-29 19:07:07.779874 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:07:07.779884 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-08-29 19:07:07.779894 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-08-29 19:07:07.779904 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:07:07.779914 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-08-29 19:07:07.779924 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:07:07.779935 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-08-29 19:07:07.779945 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:07:07.779955 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:07:07.779966 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-08-29 19:07:07.779976 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:07:07.779986 | orchestrator |
2025-08-29 19:07:07.779999 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-08-29 19:07:07.780009 | orchestrator | Friday 29 August 2025  19:07:02 +0000 (0:00:00.323)       0:03:31.954 *********
2025-08-29 19:07:07.780019 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-08-29 19:07:07.780030 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-08-29 19:07:07.780040 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-08-29 19:07:07.780050 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-08-29 19:07:07.780061 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-08-29 19:07:07.780071 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-08-29 19:07:07.780080 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-08-29 19:07:07.780090 | orchestrator |
2025-08-29 19:07:07.780100 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-08-29 19:07:07.780111 | orchestrator | Friday 29 August 2025  19:07:03 +0000 (0:00:01.018)       0:03:32.973 *********
2025-08-29 19:07:07.780124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:07:07.780137 | orchestrator |
2025-08-29 19:07:07.780148 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-08-29 19:07:07.780159 | orchestrator | Friday 29 August 2025  19:07:03 +0000 (0:00:00.549)       0:03:33.522 *********
2025-08-29 19:07:07.780190 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:07:07.780202 | orchestrator | ok: [testbed-manager]
2025-08-29 19:07:07.780213 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:07:07.780223 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:07:07.780234 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:07:07.780244 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:07:07.780255 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:07:07.780265 | orchestrator |
2025-08-29 19:07:07.780275 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-08-29 19:07:07.780286 | orchestrator | Friday 29 August 2025  19:07:05 +0000 (0:00:01.176)       0:03:34.699 *********
2025-08-29 19:07:07.780296 | orchestrator | ok: [testbed-manager]
2025-08-29 19:07:07.780307 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:07:07.780317 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:07:07.780327 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:07:07.780353 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:07:07.780364 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:07:07.780374 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:07:07.780385 | orchestrator |
2025-08-29 19:07:07.780396 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-08-29 19:07:07.780406 | orchestrator | Friday 29 August 2025  19:07:05 +0000 (0:00:00.583)       0:03:35.282 *********
2025-08-29 19:07:07.780416 | orchestrator | changed: [testbed-manager]
2025-08-29 19:07:07.780427 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:07:07.780436 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:07:07.780450 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:07:07.780470 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:07:07.780481 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:07:07.780492 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:07:07.780502 | orchestrator |
2025-08-29 19:07:07.780512 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-08-29 19:07:07.780523 | orchestrator | Friday 29 August 2025  19:07:06 +0000 (0:00:00.635)       0:03:35.918 *********
2025-08-29 19:07:07.780533 | orchestrator | ok: [testbed-manager]
2025-08-29 19:07:07.780543 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:07:07.780554 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:07:07.780564 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:07:07.780573 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:07:07.780582 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:07:07.780591 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:07:07.780599 | orchestrator |
2025-08-29 19:07:07.780608 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-08-29 19:07:07.780616 | orchestrator | Friday 29 August 2025  19:07:06 +0000 (0:00:00.575)       0:03:36.493 *********
2025-08-29 19:07:07.780646 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493006.1052346, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780660 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493020.001456, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780669 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493025.1199746, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780679 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493029.011377, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780689 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493027.3508673, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780708 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493023.820357, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780718 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756493038.32712, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:07.780740 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:23.798718 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:07:23.798805 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:07:23.798813 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:07:23.798819 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:07:23.798838 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:07:23.798844 | 
orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:07:23.798849 | orchestrator | 2025-08-29 19:07:23.798855 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-08-29 19:07:23.798861 | orchestrator | Friday 29 August 2025 19:07:07 +0000 (0:00:00.942) 0:03:37.435 ********* 2025-08-29 19:07:23.798866 | orchestrator | changed: [testbed-manager] 2025-08-29 19:07:23.798872 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:07:23.798876 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:07:23.798881 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:07:23.798885 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:07:23.798890 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:07:23.798894 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:07:23.798899 | orchestrator | 2025-08-29 19:07:23.798903 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-08-29 19:07:23.798908 | orchestrator | Friday 29 August 2025 19:07:08 +0000 (0:00:01.092) 0:03:38.528 ********* 2025-08-29 19:07:23.798913 | orchestrator | changed: [testbed-manager] 2025-08-29 19:07:23.798930 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:07:23.798934 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:07:23.798939 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:07:23.798953 | orchestrator | changed: [testbed-node-3] 2025-08-29 
19:07:23.798958 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:07:23.798963 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:07:23.798967 | orchestrator | 2025-08-29 19:07:23.798972 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-08-29 19:07:23.798976 | orchestrator | Friday 29 August 2025 19:07:09 +0000 (0:00:01.121) 0:03:39.649 ********* 2025-08-29 19:07:23.798981 | orchestrator | changed: [testbed-manager] 2025-08-29 19:07:23.798985 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:07:23.798990 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:07:23.798994 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:07:23.798999 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:07:23.799003 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:07:23.799008 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:07:23.799012 | orchestrator | 2025-08-29 19:07:23.799017 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-08-29 19:07:23.799021 | orchestrator | Friday 29 August 2025 19:07:11 +0000 (0:00:01.161) 0:03:40.810 ********* 2025-08-29 19:07:23.799026 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:07:23.799030 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:07:23.799035 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:07:23.799039 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:07:23.799044 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:07:23.799048 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:07:23.799052 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:07:23.799057 | orchestrator | 2025-08-29 19:07:23.799066 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-08-29 19:07:23.799070 | orchestrator | Friday 29 August 2025 19:07:11 +0000 (0:00:00.265) 0:03:41.076 ********* 2025-08-29 
19:07:23.799075 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799080 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:07:23.799085 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:07:23.799089 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:07:23.799094 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:07:23.799098 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:07:23.799103 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:07:23.799108 | orchestrator | 2025-08-29 19:07:23.799112 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-08-29 19:07:23.799117 | orchestrator | Friday 29 August 2025 19:07:12 +0000 (0:00:00.798) 0:03:41.875 ********* 2025-08-29 19:07:23.799123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:07:23.799129 | orchestrator | 2025-08-29 19:07:23.799134 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-08-29 19:07:23.799139 | orchestrator | Friday 29 August 2025 19:07:12 +0000 (0:00:00.485) 0:03:42.361 ********* 2025-08-29 19:07:23.799143 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799149 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:07:23.799156 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:07:23.799164 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:07:23.799170 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:07:23.799208 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:07:23.799216 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:07:23.799223 | orchestrator | 2025-08-29 19:07:23.799230 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-08-29 19:07:23.799238 | orchestrator | 
Friday 29 August 2025 19:07:20 +0000 (0:00:07.620) 0:03:49.981 ********* 2025-08-29 19:07:23.799244 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:07:23.799249 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:07:23.799253 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:07:23.799258 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:07:23.799262 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799267 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:07:23.799271 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:07:23.799275 | orchestrator | 2025-08-29 19:07:23.799281 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-08-29 19:07:23.799290 | orchestrator | Friday 29 August 2025 19:07:21 +0000 (0:00:01.233) 0:03:51.214 ********* 2025-08-29 19:07:23.799295 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799300 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:07:23.799306 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:07:23.799311 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:07:23.799316 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:07:23.799321 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:07:23.799326 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:07:23.799331 | orchestrator | 2025-08-29 19:07:23.799336 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-08-29 19:07:23.799341 | orchestrator | Friday 29 August 2025 19:07:22 +0000 (0:00:01.010) 0:03:52.225 ********* 2025-08-29 19:07:23.799346 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799351 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:07:23.799356 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:07:23.799361 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:07:23.799366 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:07:23.799371 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:07:23.799376 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 19:07:23.799381 | orchestrator | 2025-08-29 19:07:23.799386 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-08-29 19:07:23.799392 | orchestrator | Friday 29 August 2025 19:07:23 +0000 (0:00:00.530) 0:03:52.755 ********* 2025-08-29 19:07:23.799402 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799407 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:07:23.799412 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:07:23.799417 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:07:23.799421 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:07:23.799426 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:07:23.799431 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:07:23.799436 | orchestrator | 2025-08-29 19:07:23.799441 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-08-29 19:07:23.799446 | orchestrator | Friday 29 August 2025 19:07:23 +0000 (0:00:00.330) 0:03:53.085 ********* 2025-08-29 19:07:23.799451 | orchestrator | ok: [testbed-manager] 2025-08-29 19:07:23.799456 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:07:23.799462 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:07:23.799467 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:07:23.799472 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:07:23.799481 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:08:32.782909 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:08:32.783027 | orchestrator | 2025-08-29 19:08:32.783043 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-08-29 19:08:32.783056 | orchestrator | Friday 29 August 2025 19:07:23 +0000 (0:00:00.372) 0:03:53.458 ********* 2025-08-29 19:08:32.783068 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:08:32.783079 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:08:32.783090 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 19:08:32.783101 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:08:32.783112 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:08:32.783123 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:08:32.783134 | orchestrator | ok: [testbed-manager] 2025-08-29 19:08:32.783145 | orchestrator | 2025-08-29 19:08:32.783156 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-08-29 19:08:32.783167 | orchestrator | Friday 29 August 2025 19:07:30 +0000 (0:00:07.214) 0:04:00.672 ********* 2025-08-29 19:08:32.783180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:08:32.783245 | orchestrator | 2025-08-29 19:08:32.783257 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-08-29 19:08:32.783269 | orchestrator | Friday 29 August 2025 19:07:31 +0000 (0:00:00.448) 0:04:01.121 ********* 2025-08-29 19:08:32.783280 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783291 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-08-29 19:08:32.783303 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783313 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-08-29 19:08:32.783324 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:08:32.783335 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783346 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-08-29 19:08:32.783357 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:08:32.783367 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783378 | orchestrator | 
skipping: [testbed-node-2] => (item=apt-daily)  2025-08-29 19:08:32.783389 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:08:32.783400 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783410 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:08:32.783421 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-08-29 19:08:32.783432 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783443 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-08-29 19:08:32.783456 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:08:32.783491 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:08:32.783504 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-08-29 19:08:32.783517 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-08-29 19:08:32.783529 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:08:32.783541 | orchestrator | 2025-08-29 19:08:32.783553 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-08-29 19:08:32.783565 | orchestrator | Friday 29 August 2025 19:07:31 +0000 (0:00:00.375) 0:04:01.496 ********* 2025-08-29 19:08:32.783578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:08:32.783591 | orchestrator | 2025-08-29 19:08:32.783604 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-08-29 19:08:32.783631 | orchestrator | Friday 29 August 2025 19:07:32 +0000 (0:00:00.401) 0:04:01.898 ********* 2025-08-29 19:08:32.783644 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-08-29 19:08:32.783656 | orchestrator | skipping: 
[testbed-node-0] => (item=ModemManager.service)  2025-08-29 19:08:32.783669 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:08:32.783682 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:08:32.783694 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-08-29 19:08:32.783707 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-08-29 19:08:32.783719 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:08:32.783731 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:08:32.783743 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-08-29 19:08:32.783755 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-08-29 19:08:32.783766 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:08:32.783778 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:08:32.783791 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-08-29 19:08:32.783803 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:08:32.783814 | orchestrator | 2025-08-29 19:08:32.783825 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-08-29 19:08:32.783836 | orchestrator | Friday 29 August 2025 19:07:32 +0000 (0:00:00.322) 0:04:02.220 ********* 2025-08-29 19:08:32.783846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:08:32.783857 | orchestrator | 2025-08-29 19:08:32.783868 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-08-29 19:08:32.783878 | orchestrator | Friday 29 August 2025 19:07:32 +0000 (0:00:00.427) 0:04:02.648 ********* 2025-08-29 19:08:32.783889 | orchestrator | changed: [testbed-node-0] 2025-08-29 
19:08:32.783919 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:08:32.783931 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:08:32.783941 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:08:32.783952 | orchestrator | changed: [testbed-manager] 2025-08-29 19:08:32.783963 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:08:32.783973 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:08:32.783984 | orchestrator | 2025-08-29 19:08:32.783994 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-08-29 19:08:32.784005 | orchestrator | Friday 29 August 2025 19:08:06 +0000 (0:00:33.833) 0:04:36.481 ********* 2025-08-29 19:08:32.784016 | orchestrator | changed: [testbed-manager] 2025-08-29 19:08:32.784027 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:08:32.784037 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:08:32.784048 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:08:32.784058 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:08:32.784068 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:08:32.784086 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:08:32.784097 | orchestrator | 2025-08-29 19:08:32.784108 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-08-29 19:08:32.784119 | orchestrator | Friday 29 August 2025 19:08:14 +0000 (0:00:07.818) 0:04:44.300 ********* 2025-08-29 19:08:32.784129 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:08:32.784140 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:08:32.784150 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:08:32.784161 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:08:32.784171 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:08:32.784182 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:08:32.784214 | orchestrator | changed: [testbed-manager] 2025-08-29 19:08:32.784225 | 
orchestrator | 2025-08-29 19:08:32.784236 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-08-29 19:08:32.784246 | orchestrator | Friday 29 August 2025 19:08:21 +0000 (0:00:07.065) 0:04:51.365 ********* 2025-08-29 19:08:32.784257 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:08:32.784268 | orchestrator | ok: [testbed-manager] 2025-08-29 19:08:32.784279 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:08:32.784289 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:08:32.784300 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:08:32.784310 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:08:32.784321 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:08:32.784331 | orchestrator | 2025-08-29 19:08:32.784342 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-08-29 19:08:32.784354 | orchestrator | Friday 29 August 2025 19:08:23 +0000 (0:00:01.624) 0:04:52.990 ********* 2025-08-29 19:08:32.784364 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:08:32.784375 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:08:32.784385 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:08:32.784396 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:08:32.784406 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:08:32.784417 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:08:32.784427 | orchestrator | changed: [testbed-manager] 2025-08-29 19:08:32.784438 | orchestrator | 2025-08-29 19:08:32.784449 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-08-29 19:08:32.784459 | orchestrator | Friday 29 August 2025 19:08:28 +0000 (0:00:05.606) 0:04:58.596 ********* 2025-08-29 19:08:32.784470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:08:32.784483 | orchestrator | 2025-08-29 19:08:32.784494 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-08-29 19:08:32.784504 | orchestrator | Friday 29 August 2025 19:08:29 +0000 (0:00:00.527) 0:04:59.124 ********* 2025-08-29 19:08:32.784515 | orchestrator | changed: [testbed-manager] 2025-08-29 19:08:32.784526 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:08:32.784536 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:08:32.784547 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:08:32.784557 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:08:32.784573 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:08:32.784583 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:08:32.784594 | orchestrator | 2025-08-29 19:08:32.784605 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-08-29 19:08:32.784615 | orchestrator | Friday 29 August 2025 19:08:30 +0000 (0:00:00.755) 0:04:59.879 ********* 2025-08-29 19:08:32.784626 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:08:32.784637 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:08:32.784647 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:08:32.784658 | orchestrator | ok: [testbed-manager] 2025-08-29 19:08:32.784668 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:08:32.784679 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:08:32.784689 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:08:32.784707 | orchestrator | 2025-08-29 19:08:32.784718 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-08-29 19:08:32.784728 | orchestrator | Friday 29 August 2025 19:08:31 +0000 (0:00:01.509) 0:05:01.389 ********* 2025-08-29 19:08:32.784739 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:08:32.784750 | orchestrator | changed: [testbed-node-5] 
2025-08-29 19:08:32.784760 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:08:32.784771 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:08:32.784781 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:08:32.784792 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:08:32.784802 | orchestrator | changed: [testbed-manager] 2025-08-29 19:08:32.784813 | orchestrator | 2025-08-29 19:08:32.784823 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-08-29 19:08:32.784834 | orchestrator | Friday 29 August 2025 19:08:32 +0000 (0:00:00.756) 0:05:02.146 ********* 2025-08-29 19:08:32.784845 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:08:32.784855 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:08:32.784866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:08:32.784876 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:08:32.784887 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:08:32.784897 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:08:32.784908 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:08:32.784918 | orchestrator | 2025-08-29 19:08:32.784929 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-08-29 19:08:32.784946 | orchestrator | Friday 29 August 2025 19:08:32 +0000 (0:00:00.295) 0:05:02.442 ********* 2025-08-29 19:08:58.364388 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:08:58.364504 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:08:58.364521 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:08:58.364532 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:08:58.364544 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:08:58.364555 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:08:58.364566 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:08:58.364577 | orchestrator | 2025-08-29 19:08:58.364590 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-08-29 19:08:58.364603 | orchestrator | Friday 29 August 2025 19:08:33 +0000 (0:00:00.407) 0:05:02.849 ********* 2025-08-29 19:08:58.364614 | orchestrator | ok: [testbed-manager] 2025-08-29 19:08:58.364626 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:08:58.364636 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:08:58.364647 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:08:58.364657 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:08:58.364668 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:08:58.364679 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:08:58.364689 | orchestrator | 2025-08-29 19:08:58.364700 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-08-29 19:08:58.364711 | orchestrator | Friday 29 August 2025 19:08:33 +0000 (0:00:00.314) 0:05:03.164 ********* 2025-08-29 19:08:58.364722 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:08:58.364733 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:08:58.364744 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:08:58.364754 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:08:58.364765 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:08:58.364776 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:08:58.364787 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:08:58.364798 | orchestrator | 2025-08-29 19:08:58.364809 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-08-29 19:08:58.364820 | orchestrator | Friday 29 August 2025 19:08:33 +0000 (0:00:00.282) 0:05:03.446 ********* 2025-08-29 19:08:58.364831 | orchestrator | ok: [testbed-manager] 2025-08-29 19:08:58.364842 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:08:58.364853 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:08:58.364864 | orchestrator | ok: 
[testbed-node-2]
2025-08-29 19:08:58.364874 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:08:58.364909 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:08:58.364920 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:08:58.364933 | orchestrator |
2025-08-29 19:08:58.364945 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-08-29 19:08:58.364957 | orchestrator | Friday 29 August 2025  19:08:34 +0000 (0:00:00.322) 0:05:03.769 *********
2025-08-29 19:08:58.364969 | orchestrator | ok: [testbed-manager] =>
2025-08-29 19:08:58.364982 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.364994 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 19:08:58.365006 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.365018 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 19:08:58.365030 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.365042 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 19:08:58.365053 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.365066 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 19:08:58.365078 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.365089 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 19:08:58.365102 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.365114 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 19:08:58.365126 | orchestrator |   docker_version: 5:27.5.1
2025-08-29 19:08:58.365138 | orchestrator |
2025-08-29 19:08:58.365150 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-08-29 19:08:58.365164 | orchestrator | Friday 29 August 2025  19:08:34 +0000 (0:00:00.331) 0:05:04.100 *********
2025-08-29 19:08:58.365176 | orchestrator | ok: [testbed-manager] =>
2025-08-29 19:08:58.365210 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365222 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 19:08:58.365235 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365247 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 19:08:58.365260 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365272 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 19:08:58.365283 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365294 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 19:08:58.365305 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365315 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 19:08:58.365326 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365337 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 19:08:58.365347 | orchestrator |   docker_cli_version: 5:27.5.1
2025-08-29 19:08:58.365358 | orchestrator |
2025-08-29 19:08:58.365369 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-08-29 19:08:58.365380 | orchestrator | Friday 29 August 2025  19:08:34 +0000 (0:00:00.274) 0:05:04.374 *********
2025-08-29 19:08:58.365390 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:08:58.365401 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:08:58.365411 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:08:58.365422 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:08:58.365432 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:08:58.365443 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:08:58.365453 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:08:58.365464 | orchestrator |
2025-08-29 19:08:58.365475 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-08-29 19:08:58.365485 | orchestrator | Friday 29 August 2025  19:08:34 +0000 (0:00:00.281) 0:05:04.656 *********
2025-08-29 19:08:58.365496 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:08:58.365507 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:08:58.365517 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:08:58.365528 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:08:58.365539 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:08:58.365549 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:08:58.365560 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:08:58.365570 | orchestrator |
2025-08-29 19:08:58.365581 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-08-29 19:08:58.365601 | orchestrator | Friday 29 August 2025  19:08:35 +0000 (0:00:00.262) 0:05:04.919 *********
2025-08-29 19:08:58.365631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:08:58.365646 | orchestrator |
2025-08-29 19:08:58.365658 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-08-29 19:08:58.365686 | orchestrator | Friday 29 August 2025  19:08:35 +0000 (0:00:00.496) 0:05:05.415 *********
2025-08-29 19:08:58.365698 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:08:58.365709 | orchestrator | ok: [testbed-manager]
2025-08-29 19:08:58.365719 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:08:58.365730 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:08:58.365740 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:08:58.365751 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:08:58.365761 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:08:58.365772 | orchestrator |
2025-08-29 19:08:58.365783 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-08-29 19:08:58.365794 | orchestrator | Friday 29 August 2025  19:08:36 +0000 (0:00:00.772) 0:05:06.188 *********
2025-08-29 19:08:58.365804 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:08:58.365815 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:08:58.365825 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:08:58.365836 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:08:58.365847 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:08:58.365857 | orchestrator | ok: [testbed-manager]
2025-08-29 19:08:58.365867 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:08:58.365878 | orchestrator |
2025-08-29 19:08:58.365889 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-08-29 19:08:58.365901 | orchestrator | Friday 29 August 2025  19:08:39 +0000 (0:00:03.130) 0:05:09.318 *********
2025-08-29 19:08:58.365912 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-08-29 19:08:58.365923 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-08-29 19:08:58.365933 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-08-29 19:08:58.365944 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-08-29 19:08:58.365955 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-08-29 19:08:58.365965 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-08-29 19:08:58.365976 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:08:58.365986 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-08-29 19:08:58.365997 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-08-29 19:08:58.366008 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:08:58.366076 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-08-29 19:08:58.366091 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-08-29 19:08:58.366102 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-08-29 19:08:58.366113 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-08-29 19:08:58.366123 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:08:58.366134 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-08-29 19:08:58.366145 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-08-29 19:08:58.366156 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:08:58.366166 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-08-29 19:08:58.366177 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-08-29 19:08:58.366188 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-08-29 19:08:58.366217 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-08-29 19:08:58.366228 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:08:58.366239 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:08:58.366250 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-08-29 19:08:58.366270 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-08-29 19:08:58.366281 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-08-29 19:08:58.366292 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:08:58.366303 | orchestrator |
2025-08-29 19:08:58.366314 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-08-29 19:08:58.366330 | orchestrator | Friday 29 August 2025  19:08:40 +0000 (0:00:00.628) 0:05:09.946 *********
2025-08-29 19:08:58.366342 | orchestrator | ok: [testbed-manager]
2025-08-29 19:08:58.366353 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:08:58.366364 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:08:58.366374 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:08:58.366385 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:08:58.366396 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:08:58.366406 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:08:58.366417 | orchestrator |
2025-08-29 19:08:58.366428 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-08-29 19:08:58.366439 | orchestrator | Friday 29 August 2025  19:08:46 +0000 (0:00:05.948) 0:05:15.895 *********
2025-08-29 19:08:58.366450 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:08:58.366460 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:08:58.366471 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:08:58.366482 | orchestrator | ok: [testbed-manager]
2025-08-29 19:08:58.366493 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:08:58.366503 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:08:58.366514 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:08:58.366525 | orchestrator |
2025-08-29 19:08:58.366536 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-08-29 19:08:58.366547 | orchestrator | Friday 29 August 2025  19:08:47 +0000 (0:00:01.231) 0:05:17.126 *********
2025-08-29 19:08:58.366558 | orchestrator | ok: [testbed-manager]
2025-08-29 19:08:58.366568 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:08:58.366579 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:08:58.366590 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:08:58.366601 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:08:58.366611 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:08:58.366622 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:08:58.366633 | orchestrator |
2025-08-29 19:08:58.366643 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-08-29 19:08:58.366654 | orchestrator | Friday 29 August 2025  19:08:55 +0000 (0:00:07.710) 0:05:24.836 *********
2025-08-29 19:08:58.366665 | orchestrator | changed: [testbed-manager]
2025-08-29 19:08:58.366676 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:08:58.366687 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:08:58.366707 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.570920 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.571050 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.571068 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.571082 | orchestrator |
2025-08-29 19:09:41.571096 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-08-29 19:09:41.571111 | orchestrator | Friday 29 August 2025  19:08:58 +0000 (0:00:03.182) 0:05:28.018 *********
2025-08-29 19:09:41.571125 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.571140 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.571154 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.571168 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.571207 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.571222 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.571235 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.571249 | orchestrator |
2025-08-29 19:09:41.571264 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-08-29 19:09:41.571279 | orchestrator | Friday 29 August 2025  19:08:59 +0000 (0:00:01.320) 0:05:29.338 *********
2025-08-29 19:09:41.571322 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.571337 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.571352 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.571366 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.571381 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.571397 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.571410 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.571426 | orchestrator |
2025-08-29 19:09:41.571444 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-08-29 19:09:41.571461 | orchestrator | Friday 29 August 2025  19:09:01 +0000 (0:00:01.481) 0:05:30.820 *********
2025-08-29 19:09:41.571478 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.571496 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.571513 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.571530 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.571546 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.571565 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.571579 | orchestrator | changed: [testbed-manager]
2025-08-29 19:09:41.571594 | orchestrator |
2025-08-29 19:09:41.571608 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-08-29 19:09:41.571622 | orchestrator | Friday 29 August 2025  19:09:01 +0000 (0:00:00.606) 0:05:31.426 *********
2025-08-29 19:09:41.571636 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.571651 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.571663 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.571678 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.571693 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.571710 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.571725 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.571737 | orchestrator |
2025-08-29 19:09:41.571751 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-08-29 19:09:41.571764 | orchestrator | Friday 29 August 2025  19:09:11 +0000 (0:00:09.388) 0:05:40.815 *********
2025-08-29 19:09:41.571776 | orchestrator | changed: [testbed-manager]
2025-08-29 19:09:41.571788 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.571801 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.571814 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.571827 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.571839 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.571852 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.571864 | orchestrator |
2025-08-29 19:09:41.571878 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-08-29 19:09:41.571891 | orchestrator | Friday 29 August 2025  19:09:12 +0000 (0:00:00.961) 0:05:41.776 *********
2025-08-29 19:09:41.571903 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.571915 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.571928 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.571941 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.571956 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.571969 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.572000 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.572014 | orchestrator |
2025-08-29 19:09:41.572027 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-08-29 19:09:41.572041 | orchestrator | Friday 29 August 2025  19:09:20 +0000 (0:00:08.400) 0:05:50.177 *********
2025-08-29 19:09:41.572055 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.572070 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.572084 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.572099 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.572113 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.572125 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.572139 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.572152 | orchestrator |
2025-08-29 19:09:41.572166 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-08-29 19:09:41.572225 | orchestrator | Friday 29 August 2025  19:09:31 +0000 (0:00:10.917) 0:06:01.094 *********
2025-08-29 19:09:41.572236 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-08-29 19:09:41.572244 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-08-29 19:09:41.572252 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-08-29 19:09:41.572260 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-08-29 19:09:41.572267 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-08-29 19:09:41.572275 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-08-29 19:09:41.572283 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-08-29 19:09:41.572291 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-08-29 19:09:41.572299 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-08-29 19:09:41.572307 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-08-29 19:09:41.572315 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-08-29 19:09:41.572322 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-08-29 19:09:41.572330 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-08-29 19:09:41.572338 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-08-29 19:09:41.572346 | orchestrator |
2025-08-29 19:09:41.572354 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-08-29 19:09:41.572382 | orchestrator | Friday 29 August 2025  19:09:32 +0000 (0:00:01.241) 0:06:02.336 *********
2025-08-29 19:09:41.572391 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:09:41.572399 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.572407 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.572415 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.572422 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.572430 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.572438 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.572445 | orchestrator |
2025-08-29 19:09:41.572453 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-08-29 19:09:41.572461 | orchestrator | Friday 29 August 2025  19:09:33 +0000 (0:00:00.560) 0:06:02.896 *********
2025-08-29 19:09:41.572469 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.572477 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:09:41.572485 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:09:41.572492 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:09:41.572500 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:09:41.572508 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:09:41.572516 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:09:41.572523 | orchestrator |
2025-08-29 19:09:41.572531 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-08-29 19:09:41.572540 | orchestrator | Friday 29 August 2025  19:09:36 +0000 (0:00:03.597) 0:06:06.494 *********
2025-08-29 19:09:41.572548 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:09:41.572555 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.572563 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.572571 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.572578 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.572586 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.572594 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.572602 | orchestrator |
2025-08-29 19:09:41.572611 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-08-29 19:09:41.572619 | orchestrator | Friday 29 August 2025  19:09:37 +0000 (0:00:00.578) 0:06:07.073 *********
2025-08-29 19:09:41.572627 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-08-29 19:09:41.572635 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-08-29 19:09:41.572643 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:09:41.572651 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-08-29 19:09:41.572665 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-08-29 19:09:41.572673 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.572680 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-08-29 19:09:41.572688 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-08-29 19:09:41.572696 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.572704 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-08-29 19:09:41.572712 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-08-29 19:09:41.572720 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.572727 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-08-29 19:09:41.572735 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-08-29 19:09:41.572743 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.572751 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-08-29 19:09:41.572759 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-08-29 19:09:41.572766 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.572774 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-08-29 19:09:41.572782 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-08-29 19:09:41.572790 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.572798 | orchestrator |
2025-08-29 19:09:41.572812 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-08-29 19:09:41.572820 | orchestrator | Friday 29 August 2025  19:09:38 +0000 (0:00:00.834) 0:06:07.907 *********
2025-08-29 19:09:41.572828 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:09:41.572836 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.572844 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.572852 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.572859 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.572867 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.572875 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.572883 | orchestrator |
2025-08-29 19:09:41.572891 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-08-29 19:09:41.572898 | orchestrator | Friday 29 August 2025  19:09:38 +0000 (0:00:00.516) 0:06:08.423 *********
2025-08-29 19:09:41.572906 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:09:41.572914 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.572922 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.572930 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.572938 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.572945 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.572953 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.572961 | orchestrator |
2025-08-29 19:09:41.572968 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-08-29 19:09:41.572976 | orchestrator | Friday 29 August 2025  19:09:39 +0000 (0:00:00.521) 0:06:08.945 *********
2025-08-29 19:09:41.572984 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:09:41.572992 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:09:41.573000 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:09:41.573007 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:09:41.573015 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:09:41.573023 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:09:41.573031 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:09:41.573038 | orchestrator |
2025-08-29 19:09:41.573046 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-08-29 19:09:41.573054 | orchestrator | Friday 29 August 2025  19:09:39 +0000 (0:00:00.586) 0:06:09.532 *********
2025-08-29 19:09:41.573062 | orchestrator | ok: [testbed-manager]
2025-08-29 19:09:41.573075 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.302200 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.302316 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:10:04.302326 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.302332 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:10:04.302338 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:10:04.302345 | orchestrator |
2025-08-29 19:10:04.302352 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-08-29 19:10:04.302360 | orchestrator | Friday 29 August 2025  19:09:41 +0000 (0:00:01.697) 0:06:11.229 *********
2025-08-29 19:10:04.302368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:10:04.302377 | orchestrator |
2025-08-29 19:10:04.302384 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-08-29 19:10:04.302391 | orchestrator | Friday 29 August 2025  19:09:42 +0000 (0:00:01.165) 0:06:12.395 *********
2025-08-29 19:10:04.302397 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302404 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.302411 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:10:04.302417 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:10:04.302423 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:10:04.302429 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:10:04.302435 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:10:04.302442 | orchestrator |
2025-08-29 19:10:04.302449 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-08-29 19:10:04.302455 | orchestrator | Friday 29 August 2025  19:09:43 +0000 (0:00:00.851) 0:06:13.247 *********
2025-08-29 19:10:04.302461 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302467 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.302473 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:10:04.302479 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:10:04.302485 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:10:04.302491 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:10:04.302497 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:10:04.302504 | orchestrator |
2025-08-29 19:10:04.302511 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-08-29 19:10:04.302517 | orchestrator | Friday 29 August 2025  19:09:44 +0000 (0:00:00.894) 0:06:14.141 *********
2025-08-29 19:10:04.302523 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302529 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.302536 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:10:04.302542 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:10:04.302548 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:10:04.302554 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:10:04.302560 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:10:04.302566 | orchestrator |
2025-08-29 19:10:04.302572 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-08-29 19:10:04.302580 | orchestrator | Friday 29 August 2025  19:09:46 +0000 (0:00:01.821) 0:06:15.963 *********
2025-08-29 19:10:04.302586 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:10:04.302593 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.302600 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.302606 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.302612 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:10:04.302619 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:10:04.302624 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:10:04.302631 | orchestrator |
2025-08-29 19:10:04.302637 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-08-29 19:10:04.302643 | orchestrator | Friday 29 August 2025  19:09:47 +0000 (0:00:01.391) 0:06:17.355 *********
2025-08-29 19:10:04.302649 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302655 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.302661 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:10:04.302667 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:10:04.302678 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:10:04.302685 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:10:04.302691 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:10:04.302697 | orchestrator |
2025-08-29 19:10:04.302703 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-08-29 19:10:04.302710 | orchestrator | Friday 29 August 2025  19:09:49 +0000 (0:00:01.324) 0:06:18.680 *********
2025-08-29 19:10:04.302716 | orchestrator | changed: [testbed-manager]
2025-08-29 19:10:04.302722 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.302728 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:10:04.302734 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:10:04.302740 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:10:04.302746 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:10:04.302753 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:10:04.302758 | orchestrator |
2025-08-29 19:10:04.302765 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-08-29 19:10:04.302771 | orchestrator | Friday 29 August 2025  19:09:50 +0000 (0:00:01.469) 0:06:20.150 *********
2025-08-29 19:10:04.302777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:10:04.302784 | orchestrator |
2025-08-29 19:10:04.302790 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-08-29 19:10:04.302797 | orchestrator | Friday 29 August 2025  19:09:51 +0000 (0:00:01.212) 0:06:21.362 *********
2025-08-29 19:10:04.302803 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.302809 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302816 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.302822 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.302829 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:10:04.302835 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:10:04.302843 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:10:04.302849 | orchestrator |
2025-08-29 19:10:04.302855 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-08-29 19:10:04.302862 | orchestrator | Friday 29 August 2025  19:09:53 +0000 (0:00:01.468) 0:06:22.831 *********
2025-08-29 19:10:04.302868 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302874 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.302896 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.302903 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.302909 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:10:04.302915 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:10:04.302921 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:10:04.302928 | orchestrator |
2025-08-29 19:10:04.302934 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-08-29 19:10:04.302941 | orchestrator | Friday 29 August 2025  19:09:54 +0000 (0:00:01.181) 0:06:24.012 *********
2025-08-29 19:10:04.302947 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.302953 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.302959 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.302966 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.302972 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:10:04.302979 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:10:04.302985 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:10:04.302991 | orchestrator |
2025-08-29 19:10:04.302998 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-08-29 19:10:04.303004 | orchestrator | Friday 29 August 2025  19:09:55 +0000 (0:00:01.178) 0:06:25.191 *********
2025-08-29 19:10:04.303010 | orchestrator | ok: [testbed-manager]
2025-08-29 19:10:04.303016 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.303022 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.303028 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.303034 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:10:04.303041 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:10:04.303051 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:10:04.303056 | orchestrator |
2025-08-29 19:10:04.303063 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-08-29 19:10:04.303069 | orchestrator | Friday 29 August 2025  19:09:56 +0000 (0:00:01.118) 0:06:26.310 *********
2025-08-29 19:10:04.303076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:10:04.303082 | orchestrator |
2025-08-29 19:10:04.303088 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303094 | orchestrator | Friday 29 August 2025  19:09:57 +0000 (0:00:01.202) 0:06:27.512 *********
2025-08-29 19:10:04.303100 | orchestrator |
2025-08-29 19:10:04.303106 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303113 | orchestrator | Friday 29 August 2025  19:09:57 +0000 (0:00:00.039) 0:06:27.552 *********
2025-08-29 19:10:04.303119 | orchestrator |
2025-08-29 19:10:04.303125 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303132 | orchestrator | Friday 29 August 2025  19:09:57 +0000 (0:00:00.044) 0:06:27.597 *********
2025-08-29 19:10:04.303138 | orchestrator |
2025-08-29 19:10:04.303144 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303151 | orchestrator | Friday 29 August 2025  19:09:57 +0000 (0:00:00.039) 0:06:27.636 *********
2025-08-29 19:10:04.303157 | orchestrator |
2025-08-29 19:10:04.303163 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303185 | orchestrator | Friday 29 August 2025  19:09:58 +0000 (0:00:00.038) 0:06:27.674 *********
2025-08-29 19:10:04.303192 | orchestrator |
2025-08-29 19:10:04.303198 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303204 | orchestrator | Friday 29 August 2025  19:09:58 +0000 (0:00:00.044) 0:06:27.719 *********
2025-08-29 19:10:04.303210 | orchestrator |
2025-08-29 19:10:04.303216 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 19:10:04.303222 | orchestrator | Friday 29 August 2025  19:09:58 +0000 (0:00:00.038) 0:06:27.758 *********
2025-08-29 19:10:04.303229 | orchestrator |
2025-08-29 19:10:04.303245 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 19:10:04.303254 | orchestrator | Friday 29 August 2025  19:09:58 +0000 (0:00:00.038) 0:06:27.796 *********
2025-08-29 19:10:04.303260 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:10:04.303265 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:10:04.303270 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:10:04.303276 | orchestrator |
2025-08-29 19:10:04.303282 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-08-29 19:10:04.303288 | orchestrator | Friday 29 August 2025  19:09:59 +0000 (0:00:01.152) 0:06:28.948 *********
2025-08-29 19:10:04.303294 | orchestrator | changed: [testbed-manager]
2025-08-29 19:10:04.303299 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.303305 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:10:04.303311 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:10:04.303317 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:10:04.303322 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:10:04.303328 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:10:04.303333 | orchestrator |
2025-08-29 19:10:04.303339 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-08-29 19:10:04.303344 | orchestrator | Friday 29 August 2025  19:10:00 +0000 (0:00:01.374) 0:06:30.322 *********
2025-08-29 19:10:04.303351 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:10:04.303356 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:10:04.303362 | orchestrator | changed: [testbed-node-1]
2025-08-29
19:10:04.303367 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:10:04.303373 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:10:04.303378 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:10:04.303388 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:10:04.303394 | orchestrator | 2025-08-29 19:10:04.303400 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-08-29 19:10:04.303406 | orchestrator | Friday 29 August 2025 19:10:03 +0000 (0:00:02.456) 0:06:32.779 ********* 2025-08-29 19:10:04.303412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:04.303417 | orchestrator | 2025-08-29 19:10:04.303423 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-08-29 19:10:04.303429 | orchestrator | Friday 29 August 2025 19:10:03 +0000 (0:00:00.120) 0:06:32.900 ********* 2025-08-29 19:10:04.303435 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:04.303440 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:10:04.303446 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:10:04.303452 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:10:04.303463 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:10:29.657456 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:10:29.657573 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:10:29.657591 | orchestrator | 2025-08-29 19:10:29.657605 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-08-29 19:10:29.657617 | orchestrator | Friday 29 August 2025 19:10:04 +0000 (0:00:01.055) 0:06:33.956 ********* 2025-08-29 19:10:29.657629 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:10:29.657640 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:29.657652 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:10:29.657663 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
19:10:29.657673 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:10:29.657684 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:10:29.657695 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:10:29.657706 | orchestrator | 2025-08-29 19:10:29.657717 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-08-29 19:10:29.657728 | orchestrator | Friday 29 August 2025 19:10:04 +0000 (0:00:00.514) 0:06:34.471 ********* 2025-08-29 19:10:29.657740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:10:29.657753 | orchestrator | 2025-08-29 19:10:29.657765 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-08-29 19:10:29.657775 | orchestrator | Friday 29 August 2025 19:10:05 +0000 (0:00:01.150) 0:06:35.622 ********* 2025-08-29 19:10:29.657786 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.657798 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:10:29.657809 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:10:29.657820 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:10:29.657832 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:10:29.657842 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:10:29.657853 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:10:29.657863 | orchestrator | 2025-08-29 19:10:29.657874 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-08-29 19:10:29.657885 | orchestrator | Friday 29 August 2025 19:10:06 +0000 (0:00:00.914) 0:06:36.536 ********* 2025-08-29 19:10:29.657896 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-08-29 19:10:29.657907 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-08-29 19:10:29.657918 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-08-29 19:10:29.657929 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-08-29 19:10:29.657939 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-08-29 19:10:29.657950 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-08-29 19:10:29.657961 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-08-29 19:10:29.657972 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-08-29 19:10:29.657984 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-08-29 19:10:29.658110 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-08-29 19:10:29.658128 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-08-29 19:10:29.658141 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-08-29 19:10:29.658153 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-08-29 19:10:29.658193 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-08-29 19:10:29.658206 | orchestrator | 2025-08-29 19:10:29.658218 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-08-29 19:10:29.658231 | orchestrator | Friday 29 August 2025 19:10:09 +0000 (0:00:02.447) 0:06:38.984 ********* 2025-08-29 19:10:29.658243 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:10:29.658256 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:29.658283 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:10:29.658296 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:10:29.658307 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:10:29.658319 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:10:29.658331 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:10:29.658343 | orchestrator | 2025-08-29 19:10:29.658354 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-08-29 19:10:29.658365 | orchestrator | Friday 29 August 2025 19:10:09 +0000 (0:00:00.513) 0:06:39.497 ********* 2025-08-29 19:10:29.658378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:10:29.658391 | orchestrator | 2025-08-29 19:10:29.658402 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-08-29 19:10:29.658413 | orchestrator | Friday 29 August 2025 19:10:10 +0000 (0:00:01.000) 0:06:40.497 ********* 2025-08-29 19:10:29.658424 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.658435 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:10:29.658445 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:10:29.658456 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:10:29.658467 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:10:29.658477 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:10:29.658488 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:10:29.658499 | orchestrator | 2025-08-29 19:10:29.658510 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-08-29 19:10:29.658521 | orchestrator | Friday 29 August 2025 19:10:11 +0000 (0:00:00.890) 0:06:41.387 ********* 2025-08-29 19:10:29.658531 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.658542 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:10:29.658553 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:10:29.658563 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:10:29.658574 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:10:29.658585 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:10:29.658596 | orchestrator | ok: [testbed-node-5] 2025-08-29 
19:10:29.658606 | orchestrator | 2025-08-29 19:10:29.658617 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-08-29 19:10:29.658647 | orchestrator | Friday 29 August 2025 19:10:12 +0000 (0:00:00.799) 0:06:42.187 ********* 2025-08-29 19:10:29.658658 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:10:29.658669 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:29.658680 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:10:29.658690 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:10:29.658701 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:10:29.658712 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:10:29.658729 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:10:29.658747 | orchestrator | 2025-08-29 19:10:29.658766 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-08-29 19:10:29.658786 | orchestrator | Friday 29 August 2025 19:10:13 +0000 (0:00:00.498) 0:06:42.686 ********* 2025-08-29 19:10:29.658805 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:10:29.658839 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.658851 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:10:29.658862 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:10:29.658872 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:10:29.658883 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:10:29.658893 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:10:29.658904 | orchestrator | 2025-08-29 19:10:29.658915 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-08-29 19:10:29.658925 | orchestrator | Friday 29 August 2025 19:10:14 +0000 (0:00:01.459) 0:06:44.145 ********* 2025-08-29 19:10:29.658936 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:10:29.658946 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:29.658957 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 19:10:29.658968 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:10:29.658978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:10:29.658988 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:10:29.658999 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:10:29.659010 | orchestrator | 2025-08-29 19:10:29.659021 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-08-29 19:10:29.659032 | orchestrator | Friday 29 August 2025 19:10:14 +0000 (0:00:00.519) 0:06:44.665 ********* 2025-08-29 19:10:29.659042 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.659053 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:10:29.659064 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:10:29.659074 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:10:29.659085 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:10:29.659095 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:10:29.659106 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:10:29.659117 | orchestrator | 2025-08-29 19:10:29.659127 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-08-29 19:10:29.659138 | orchestrator | Friday 29 August 2025 19:10:22 +0000 (0:00:07.310) 0:06:51.976 ********* 2025-08-29 19:10:29.659149 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.659232 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:10:29.659244 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:10:29.659254 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:10:29.659265 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:10:29.659275 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:10:29.659286 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:10:29.659296 | orchestrator | 2025-08-29 19:10:29.659307 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-08-29 19:10:29.659318 | orchestrator | Friday 29 August 2025 19:10:23 +0000 (0:00:01.355) 0:06:53.332 ********* 2025-08-29 19:10:29.659329 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.659339 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:10:29.659350 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:10:29.659361 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:10:29.659371 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:10:29.659382 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:10:29.659392 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:10:29.659403 | orchestrator | 2025-08-29 19:10:29.659414 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-08-29 19:10:29.659424 | orchestrator | Friday 29 August 2025 19:10:25 +0000 (0:00:01.857) 0:06:55.189 ********* 2025-08-29 19:10:29.659435 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.659452 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:10:29.659463 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:10:29.659473 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:10:29.659484 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:10:29.659494 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:10:29.659505 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:10:29.659515 | orchestrator | 2025-08-29 19:10:29.659526 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 19:10:29.659537 | orchestrator | Friday 29 August 2025 19:10:27 +0000 (0:00:01.726) 0:06:56.915 ********* 2025-08-29 19:10:29.659556 | orchestrator | ok: [testbed-manager] 2025-08-29 19:10:29.659566 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:10:29.659577 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:10:29.659587 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:10:29.659598 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 19:10:29.659608 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:10:29.659619 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:10:29.659629 | orchestrator | 2025-08-29 19:10:29.659640 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 19:10:29.659651 | orchestrator | Friday 29 August 2025 19:10:28 +0000 (0:00:00.854) 0:06:57.770 ********* 2025-08-29 19:10:29.659662 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:10:29.659673 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:29.659683 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:10:29.659694 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:10:29.659705 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:10:29.659715 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:10:29.659725 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:10:29.659736 | orchestrator | 2025-08-29 19:10:29.659747 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-08-29 19:10:29.659757 | orchestrator | Friday 29 August 2025 19:10:29 +0000 (0:00:01.006) 0:06:58.776 ********* 2025-08-29 19:10:29.659768 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:10:29.659778 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:10:29.659789 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:10:29.659799 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:10:29.659810 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:10:29.659821 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:10:29.659831 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:10:29.659842 | orchestrator | 2025-08-29 19:10:29.659861 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-08-29 19:11:01.928829 | orchestrator | Friday 29 August 2025 19:10:29 +0000 (0:00:00.541) 0:06:59.318 
********* 2025-08-29 19:11:01.928946 | orchestrator | ok: [testbed-manager] 2025-08-29 19:11:01.928964 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:11:01.928976 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:11:01.928987 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:11:01.928998 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:11:01.929009 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:11:01.929019 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:11:01.929030 | orchestrator | 2025-08-29 19:11:01.929042 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-08-29 19:11:01.929054 | orchestrator | Friday 29 August 2025 19:10:30 +0000 (0:00:00.516) 0:06:59.834 ********* 2025-08-29 19:11:01.929066 | orchestrator | ok: [testbed-manager] 2025-08-29 19:11:01.929077 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:11:01.929088 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:11:01.929099 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:11:01.929109 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:11:01.929120 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:11:01.929131 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:11:01.929202 | orchestrator | 2025-08-29 19:11:01.929215 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-08-29 19:11:01.929226 | orchestrator | Friday 29 August 2025 19:10:30 +0000 (0:00:00.543) 0:07:00.378 ********* 2025-08-29 19:11:01.929237 | orchestrator | ok: [testbed-manager] 2025-08-29 19:11:01.929248 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:11:01.929259 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:11:01.929270 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:11:01.929281 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:11:01.929291 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:11:01.929302 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:11:01.929313 | orchestrator | 
2025-08-29 19:11:01.929324 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-08-29 19:11:01.929359 | orchestrator | Friday 29 August 2025 19:10:31 +0000 (0:00:00.513) 0:07:00.891 *********
2025-08-29 19:11:01.929373 | orchestrator | ok: [testbed-manager]
2025-08-29 19:11:01.929386 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.929398 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.929410 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.929422 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.929434 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.929447 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.929459 | orchestrator |
2025-08-29 19:11:01.929472 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-08-29 19:11:01.929484 | orchestrator | Friday 29 August 2025 19:10:36 +0000 (0:00:05.638) 0:07:06.530 *********
2025-08-29 19:11:01.929497 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:11:01.929510 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:11:01.929523 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:11:01.929535 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:11:01.929547 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:11:01.929559 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:11:01.929571 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:11:01.929584 | orchestrator |
2025-08-29 19:11:01.929597 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-08-29 19:11:01.929610 | orchestrator | Friday 29 August 2025 19:10:37 +0000 (0:00:00.537) 0:07:07.067 *********
2025-08-29 19:11:01.929625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:11:01.929640 | orchestrator |
2025-08-29 19:11:01.929652 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-08-29 19:11:01.929665 | orchestrator | Friday 29 August 2025 19:10:38 +0000 (0:00:00.803) 0:07:07.871 *********
2025-08-29 19:11:01.929678 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.929698 | orchestrator | ok: [testbed-manager]
2025-08-29 19:11:01.929741 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.929769 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.929787 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.929806 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.929824 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.929844 | orchestrator |
2025-08-29 19:11:01.929865 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-08-29 19:11:01.929884 | orchestrator | Friday 29 August 2025 19:10:40 +0000 (0:00:02.014) 0:07:09.886 *********
2025-08-29 19:11:01.929901 | orchestrator | ok: [testbed-manager]
2025-08-29 19:11:01.929912 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.929923 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.929933 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.929944 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.929954 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.929965 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.929975 | orchestrator |
2025-08-29 19:11:01.929986 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-08-29 19:11:01.929997 | orchestrator | Friday 29 August 2025 19:10:41 +0000 (0:00:01.169) 0:07:11.056 *********
2025-08-29 19:11:01.930007 | orchestrator | ok: [testbed-manager]
2025-08-29 19:11:01.930081 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.930094 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.930104 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.930115 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.930125 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.930136 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.930166 | orchestrator |
2025-08-29 19:11:01.930177 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-08-29 19:11:01.930187 | orchestrator | Friday 29 August 2025 19:10:42 +0000 (0:00:00.875) 0:07:11.931 *********
2025-08-29 19:11:01.930211 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930224 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930235 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930266 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930278 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930289 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930299 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 19:11:01.930310 | orchestrator |
2025-08-29 19:11:01.930321 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-08-29 19:11:01.930332 | orchestrator | Friday 29 August 2025 19:10:43 +0000 (0:00:01.669) 0:07:13.601 *********
2025-08-29 19:11:01.930343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:11:01.930355 | orchestrator |
2025-08-29 19:11:01.930366 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-08-29 19:11:01.930377 | orchestrator | Friday 29 August 2025 19:10:44 +0000 (0:00:01.039) 0:07:14.640 *********
2025-08-29 19:11:01.930387 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:11:01.930398 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:11:01.930409 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:11:01.930420 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:11:01.930431 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:11:01.930441 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:11:01.930452 | orchestrator | changed: [testbed-manager]
2025-08-29 19:11:01.930462 | orchestrator |
2025-08-29 19:11:01.930473 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-08-29 19:11:01.930483 | orchestrator | Friday 29 August 2025 19:10:54 +0000 (0:00:09.071) 0:07:23.712 *********
2025-08-29 19:11:01.930494 | orchestrator | ok: [testbed-manager]
2025-08-29 19:11:01.930505 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.930516 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.930526 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.930537 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.930547 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.930558 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.930568 | orchestrator |
2025-08-29 19:11:01.930579 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-08-29 19:11:01.930590 | orchestrator | Friday 29 August 2025 19:10:55 +0000 (0:00:01.955) 0:07:25.667 *********
2025-08-29 19:11:01.930600 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.930611 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.930622 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.930632 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.930643 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.930653 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.930664 | orchestrator |
2025-08-29 19:11:01.930675 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-08-29 19:11:01.930685 | orchestrator | Friday 29 August 2025 19:10:57 +0000 (0:00:01.250) 0:07:26.918 *********
2025-08-29 19:11:01.930696 | orchestrator | changed: [testbed-manager]
2025-08-29 19:11:01.930714 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:11:01.930725 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:11:01.930735 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:11:01.930746 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:11:01.930757 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:11:01.930774 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:11:01.930785 | orchestrator |
2025-08-29 19:11:01.930796 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-08-29 19:11:01.930807 | orchestrator |
2025-08-29 19:11:01.930817 | orchestrator | TASK [Include hardening role] **************************************************
2025-08-29 19:11:01.930828 | orchestrator | Friday 29 August 2025 19:10:58 +0000 (0:00:01.194) 0:07:28.112 *********
2025-08-29 19:11:01.930839 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:11:01.930850 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:11:01.930860 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:11:01.930871 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:11:01.930881 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:11:01.930892 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:11:01.930902 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:11:01.930913 | orchestrator |
2025-08-29 19:11:01.930924 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-08-29 19:11:01.930934 | orchestrator |
2025-08-29 19:11:01.930945 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-08-29 19:11:01.930956 | orchestrator | Friday 29 August 2025 19:10:59 +0000 (0:00:00.605) 0:07:28.717 *********
2025-08-29 19:11:01.930967 | orchestrator | changed: [testbed-manager]
2025-08-29 19:11:01.930978 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:11:01.930988 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:11:01.930999 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:11:01.931009 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:11:01.931020 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:11:01.931030 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:11:01.931041 | orchestrator |
2025-08-29 19:11:01.931052 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-08-29 19:11:01.931062 | orchestrator | Friday 29 August 2025 19:11:00 +0000 (0:00:01.482) 0:07:30.200 *********
2025-08-29 19:11:01.931073 | orchestrator | ok: [testbed-manager]
2025-08-29 19:11:01.931084 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:11:01.931094 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:11:01.931105 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:11:01.931115 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:11:01.931126 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:11:01.931136 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:11:01.931166 | orchestrator |
2025-08-29 19:11:01.931177 | orchestrator | TASK [Include auditd role] *****************************************************
2025-08-29 19:11:01.931195 | orchestrator | Friday 29 August 2025 19:11:01 +0000 (0:00:01.375) 0:07:31.576 *********
2025-08-29 19:11:24.961077 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:11:24.961290 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:11:24.961317 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:11:24.961328 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:11:24.961339 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:11:24.961349 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:11:24.961359 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:11:24.961370 | orchestrator |
2025-08-29 19:11:24.961381 | orchestrator | TASK [Include smartd role] *****************************************************
2025-08-29 19:11:24.961392 | orchestrator | Friday 29 August 2025 19:11:02 +0000 (0:00:00.506) 0:07:32.082 *********
2025-08-29 19:11:24.961402 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:11:24.961413 | orchestrator |
2025-08-29 19:11:24.961423 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-08-29 19:11:24.961455 | orchestrator | Friday 29 August 2025 19:11:03 +0000 (0:00:00.976) 0:07:33.059 *********
2025-08-29 19:11:24.961468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:11:24.961480 | orchestrator |
2025-08-29 19:11:24.961490 | orchestrator |
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-08-29 19:11:24.961500 | orchestrator | Friday 29 August 2025 19:11:04 +0000 (0:00:00.794) 0:07:33.853 ********* 2025-08-29 19:11:24.961509 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.961519 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.961528 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.961538 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.961547 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.961557 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.961566 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.961576 | orchestrator | 2025-08-29 19:11:24.961585 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-08-29 19:11:24.961595 | orchestrator | Friday 29 August 2025 19:11:12 +0000 (0:00:07.925) 0:07:41.778 ********* 2025-08-29 19:11:24.961605 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.961614 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.961626 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.961636 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.961647 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.961658 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.961671 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.961688 | orchestrator | 2025-08-29 19:11:24.961704 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-08-29 19:11:24.961720 | orchestrator | Friday 29 August 2025 19:11:12 +0000 (0:00:00.833) 0:07:42.612 ********* 2025-08-29 19:11:24.961736 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.961753 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.961772 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.961789 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.961805 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.961817 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.961832 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.961847 | orchestrator | 2025-08-29 19:11:24.961862 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-08-29 19:11:24.961879 | orchestrator | Friday 29 August 2025 19:11:14 +0000 (0:00:01.536) 0:07:44.149 ********* 2025-08-29 19:11:24.961896 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.961912 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.961923 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.961933 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.961942 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.961952 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.961961 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.961971 | orchestrator | 2025-08-29 19:11:24.961980 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-08-29 19:11:24.961990 | orchestrator | Friday 29 August 2025 19:11:16 +0000 (0:00:01.826) 0:07:45.976 ********* 2025-08-29 19:11:24.962000 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.962010 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.962077 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.962088 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.962223 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.962240 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.962250 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.962282 | orchestrator | 2025-08-29 19:11:24.962292 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-08-29 
19:11:24.962335 | orchestrator | Friday 29 August 2025 19:11:17 +0000 (0:00:01.176) 0:07:47.152 ********* 2025-08-29 19:11:24.962346 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.962355 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.962365 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.962375 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.962384 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.962394 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.962403 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.962413 | orchestrator | 2025-08-29 19:11:24.962422 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-08-29 19:11:24.962432 | orchestrator | 2025-08-29 19:11:24.962442 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-08-29 19:11:24.962451 | orchestrator | Friday 29 August 2025 19:11:18 +0000 (0:00:01.340) 0:07:48.492 ********* 2025-08-29 19:11:24.962461 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:11:24.962471 | orchestrator | 2025-08-29 19:11:24.962481 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 19:11:24.962535 | orchestrator | Friday 29 August 2025 19:11:19 +0000 (0:00:00.820) 0:07:49.313 ********* 2025-08-29 19:11:24.962547 | orchestrator | ok: [testbed-manager] 2025-08-29 19:11:24.962557 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:11:24.962567 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:11:24.962577 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:11:24.962586 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:11:24.962595 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:11:24.962605 | orchestrator | ok: [testbed-node-5] 2025-08-29 
19:11:24.962614 | orchestrator | 2025-08-29 19:11:24.962624 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 19:11:24.962634 | orchestrator | Friday 29 August 2025 19:11:20 +0000 (0:00:00.818) 0:07:50.132 ********* 2025-08-29 19:11:24.962644 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.962653 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.962663 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.962677 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.962694 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.962712 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.962727 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.962742 | orchestrator | 2025-08-29 19:11:24.962760 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-08-29 19:11:24.962777 | orchestrator | Friday 29 August 2025 19:11:21 +0000 (0:00:01.390) 0:07:51.522 ********* 2025-08-29 19:11:24.962792 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:11:24.962808 | orchestrator | 2025-08-29 19:11:24.962818 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 19:11:24.962827 | orchestrator | Friday 29 August 2025 19:11:22 +0000 (0:00:00.890) 0:07:52.413 ********* 2025-08-29 19:11:24.962837 | orchestrator | ok: [testbed-manager] 2025-08-29 19:11:24.962846 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:11:24.962856 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:11:24.962866 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:11:24.962875 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:11:24.962884 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:11:24.962894 | orchestrator | ok: [testbed-node-5] 2025-08-29 
19:11:24.962903 | orchestrator | 2025-08-29 19:11:24.962913 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 19:11:24.962923 | orchestrator | Friday 29 August 2025 19:11:23 +0000 (0:00:00.864) 0:07:53.277 ********* 2025-08-29 19:11:24.962932 | orchestrator | changed: [testbed-manager] 2025-08-29 19:11:24.962942 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:11:24.962951 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:11:24.962969 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:11:24.962979 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:11:24.962988 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:11:24.962998 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:11:24.963007 | orchestrator | 2025-08-29 19:11:24.963017 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:11:24.963027 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-08-29 19:11:24.963037 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-08-29 19:11:24.963047 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 19:11:24.963062 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 19:11:24.963072 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 19:11:24.963082 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 19:11:24.963092 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 19:11:24.963101 | orchestrator | 2025-08-29 19:11:24.963111 | orchestrator | 2025-08-29 
19:11:24.963151 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:11:24.963161 | orchestrator | Friday 29 August 2025 19:11:24 +0000 (0:00:01.327) 0:07:54.605 ********* 2025-08-29 19:11:24.963171 | orchestrator | =============================================================================== 2025-08-29 19:11:24.963181 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.94s 2025-08-29 19:11:24.963190 | orchestrator | osism.commons.packages : Download required packages -------------------- 40.04s 2025-08-29 19:11:24.963200 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.83s 2025-08-29 19:11:24.963209 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.27s 2025-08-29 19:11:24.963219 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.03s 2025-08-29 19:11:24.963229 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.92s 2025-08-29 19:11:24.963240 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.92s 2025-08-29 19:11:24.963249 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.39s 2025-08-29 19:11:24.963259 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.07s 2025-08-29 19:11:24.963268 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.40s 2025-08-29 19:11:24.963286 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.93s 2025-08-29 19:11:25.452609 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.82s 2025-08-29 19:11:25.452743 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.71s 2025-08-29 19:11:25.452759 | 
orchestrator | osism.services.rng : Install rng package -------------------------------- 7.62s 2025-08-29 19:11:25.452771 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.31s 2025-08-29 19:11:25.452782 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 7.21s 2025-08-29 19:11:25.452793 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.07s 2025-08-29 19:11:25.452803 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.95s 2025-08-29 19:11:25.452841 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.72s 2025-08-29 19:11:25.452853 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.64s 2025-08-29 19:11:25.774667 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 19:11:25.774771 | orchestrator | + osism apply network 2025-08-29 19:11:38.630215 | orchestrator | 2025-08-29 19:11:38 | INFO  | Task ab95d582-ce8a-4b62-85ca-a222a966ac0c (network) was prepared for execution. 2025-08-29 19:11:38.630314 | orchestrator | 2025-08-29 19:11:38 | INFO  | It takes a moment until task ab95d582-ce8a-4b62-85ca-a222a966ac0c (network) has been started and output is visible here. 
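Each entry in the TASKS RECAP above follows a fixed shape: the task name, a run of dashes, and the duration in seconds. When comparing timings across job runs it can be handy to pull these lines into structured data; a minimal sketch (the regex and function name are my own, not part of the OSISM tooling):

```python
import re

# A TASKS RECAP line ends in a run of dashes followed by the duration,
# e.g. "osism.commons.packages : Install required packages ------- 73.94s"
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs for every line that matches the recap shape."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out

sample = [
    "osism.commons.packages : Install required packages --------------------- 73.94s",
    "osism.services.smartd : Install smartmontools package ------------------- 7.93s",
]
print(parse_recap(sample))
```

Sorting the resulting pairs by the second element reproduces the slowest-first ordering Ansible prints in the recap.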
2025-08-29 19:12:08.723837 | orchestrator |
2025-08-29 19:12:08.723971 | orchestrator | PLAY [Apply role network] ******************************************************
2025-08-29 19:12:08.723994 | orchestrator |
2025-08-29 19:12:08.724023 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-08-29 19:12:08.724036 | orchestrator | Friday 29 August 2025 19:11:43 +0000 (0:00:00.292) 0:00:00.292 *********
2025-08-29 19:12:08.724048 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.724060 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.724071 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.724105 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.724116 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.724127 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.724138 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.724150 | orchestrator |
2025-08-29 19:12:08.724161 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-08-29 19:12:08.724172 | orchestrator | Friday 29 August 2025 19:11:43 +0000 (0:00:00.701) 0:00:00.994 *********
2025-08-29 19:12:08.724186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:12:08.724200 | orchestrator |
2025-08-29 19:12:08.724211 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-08-29 19:12:08.724222 | orchestrator | Friday 29 August 2025 19:11:44 +0000 (0:00:01.216) 0:00:02.211 *********
2025-08-29 19:12:08.724233 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.724244 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.724255 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.724265 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.724276 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.724287 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.724297 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.724308 | orchestrator |
2025-08-29 19:12:08.724319 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-08-29 19:12:08.724331 | orchestrator | Friday 29 August 2025 19:11:47 +0000 (0:00:02.097) 0:00:04.309 *********
2025-08-29 19:12:08.724341 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.724354 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.724367 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.724379 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.724391 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.724405 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.724417 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.724430 | orchestrator |
2025-08-29 19:12:08.724442 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-08-29 19:12:08.724455 | orchestrator | Friday 29 August 2025 19:11:48 +0000 (0:00:01.677) 0:00:05.986 *********
2025-08-29 19:12:08.724468 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-08-29 19:12:08.724482 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-08-29 19:12:08.724494 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-08-29 19:12:08.724507 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-08-29 19:12:08.724519 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-08-29 19:12:08.724557 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-08-29 19:12:08.724570 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-08-29 19:12:08.724583 | orchestrator |
2025-08-29 19:12:08.724595 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-08-29 19:12:08.724608 | orchestrator | Friday 29 August 2025 19:11:49 +0000 (0:00:00.984) 0:00:06.970 *********
2025-08-29 19:12:08.724621 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:12:08.724634 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 19:12:08.724647 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 19:12:08.724659 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 19:12:08.724672 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 19:12:08.724685 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 19:12:08.724697 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 19:12:08.724709 | orchestrator |
2025-08-29 19:12:08.724720 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-08-29 19:12:08.724731 | orchestrator | Friday 29 August 2025 19:11:53 +0000 (0:00:03.407) 0:00:10.378 *********
2025-08-29 19:12:08.724742 | orchestrator | changed: [testbed-manager]
2025-08-29 19:12:08.724752 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:12:08.724763 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:12:08.724773 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:12:08.724784 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:12:08.724794 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:12:08.724805 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:12:08.724816 | orchestrator |
2025-08-29 19:12:08.724827 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-08-29 19:12:08.724837 | orchestrator | Friday 29 August 2025 19:11:54 +0000 (0:00:01.439) 0:00:11.818 *********
2025-08-29 19:12:08.724848 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:12:08.724859 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 19:12:08.724870 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 19:12:08.724880 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 19:12:08.724891 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 19:12:08.724901 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 19:12:08.724912 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 19:12:08.724923 | orchestrator |
2025-08-29 19:12:08.724933 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-08-29 19:12:08.724944 | orchestrator | Friday 29 August 2025 19:11:56 +0000 (0:00:02.186) 0:00:14.004 *********
2025-08-29 19:12:08.724955 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.724966 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.724976 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.724987 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.724997 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.725008 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.725018 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.725029 | orchestrator |
2025-08-29 19:12:08.725040 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-08-29 19:12:08.725070 | orchestrator | Friday 29 August 2025 19:11:57 +0000 (0:00:01.162) 0:00:15.167 *********
2025-08-29 19:12:08.725149 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:12:08.725161 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:12:08.725171 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:12:08.725182 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:12:08.725193 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:12:08.725203 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:12:08.725214 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:12:08.725225 | orchestrator |
2025-08-29 19:12:08.725236 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-08-29 19:12:08.725246 | orchestrator | Friday 29 August 2025 19:11:58 +0000 (0:00:00.678) 0:00:15.845 *********
2025-08-29 19:12:08.725257 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.725278 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.725289 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.725299 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.725310 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.725321 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.725331 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.725342 | orchestrator |
2025-08-29 19:12:08.725352 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-08-29 19:12:08.725363 | orchestrator | Friday 29 August 2025 19:12:00 +0000 (0:00:02.119) 0:00:17.965 *********
2025-08-29 19:12:08.725374 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:12:08.725384 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:12:08.725395 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:12:08.725406 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:12:08.725416 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:12:08.725427 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:12:08.725439 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-08-29 19:12:08.725451 | orchestrator |
2025-08-29 19:12:08.725461 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-08-29 19:12:08.725486 | orchestrator | Friday 29 August 2025 19:12:01 +0000 (0:00:00.970) 0:00:18.936 *********
2025-08-29 19:12:08.725498 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.725508 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:12:08.725519 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:12:08.725530 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:12:08.725540 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:12:08.725551 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:12:08.725561 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:12:08.725572 | orchestrator |
2025-08-29 19:12:08.725583 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-08-29 19:12:08.725593 | orchestrator | Friday 29 August 2025 19:12:04 +0000 (0:00:02.693) 0:00:21.630 *********
2025-08-29 19:12:08.725604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:12:08.725617 | orchestrator |
2025-08-29 19:12:08.725628 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-08-29 19:12:08.725639 | orchestrator | Friday 29 August 2025 19:12:05 +0000 (0:00:01.344) 0:00:22.974 *********
2025-08-29 19:12:08.725649 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.725660 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.725671 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.725681 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.725692 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.725703 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.725713 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.725724 | orchestrator |
2025-08-29 19:12:08.725734 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-08-29 19:12:08.725745 | orchestrator | Friday 29 August 2025 19:12:06 +0000 (0:00:00.990) 0:00:23.964 *********
2025-08-29 19:12:08.725756 | orchestrator | ok: [testbed-manager]
2025-08-29 19:12:08.725766 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:12:08.725777 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:12:08.725788 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:12:08.725798 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:12:08.725809 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:12:08.725819 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:12:08.725830 | orchestrator |
2025-08-29 19:12:08.725840 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-08-29 19:12:08.725851 | orchestrator | Friday 29 August 2025 19:12:07 +0000 (0:00:00.816) 0:00:24.781 *********
2025-08-29 19:12:08.725862 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725880 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725891 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725902 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725912 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.725923 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725934 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.725944 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725955 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.725966 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 19:12:08.725976 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.725987 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.725998 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.726008 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 19:12:08.726102 | orchestrator |
2025-08-29 19:12:08.726125 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-08-29 19:12:25.016950 | orchestrator | Friday 29 August 2025 19:12:08 +0000 (0:00:01.217) 0:00:25.998 *********
2025-08-29 19:12:25.017111 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:12:25.017131 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:12:25.017144 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:12:25.017156 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:12:25.017166 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:12:25.017177 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:12:25.017188 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:12:25.017199 | orchestrator |
2025-08-29 19:12:25.017211 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-08-29 19:12:25.017222 | orchestrator | Friday 29 August 2025 19:12:09 +0000 (0:00:00.639) 0:00:26.637 *********
2025-08-29 19:12:25.017236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-2, testbed-manager, testbed-node-1, testbed-node-0, testbed-node-4, testbed-node-3, testbed-node-5
2025-08-29 19:12:25.017250 | orchestrator |
2025-08-29 19:12:25.017262 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-08-29 19:12:25.017272 | orchestrator | Friday 29 August 2025 19:12:14 +0000 (0:00:04.814) 0:00:31.452 *********
2025-08-29 19:12:25.017284 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017329 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017493 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017519 | orchestrator |
2025-08-29 19:12:25.017531 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-08-29 19:12:25.017544 | orchestrator | Friday 29 August 2025 19:12:19 +0000 (0:00:05.486) 0:00:36.939 *********
2025-08-29 19:12:25.017557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017570 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-08-29 19:12:25.017618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-08-29 19:12:25.017631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12',
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 19:12:25.017644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 19:12:25.017657 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 19:12:25.017670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 19:12:25.017683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 19:12:25.017695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 19:12:25.017708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 19:12:25.017733 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 19:12:31.275893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 19:12:31.276013 | orchestrator | 2025-08-29 19:12:31.276031 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 19:12:31.276045 | orchestrator | Friday 29 August 2025 19:12:24 +0000 (0:00:05.346) 0:00:42.285 ********* 2025-08-29 19:12:31.276058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:12:31.276108 | orchestrator | 2025-08-29 19:12:31.276119 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 19:12:31.276131 | orchestrator | Friday 29 August 2025 19:12:26 +0000 (0:00:01.283) 0:00:43.568 ********* 2025-08-29 19:12:31.276185 | orchestrator | ok: [testbed-manager] 2025-08-29 19:12:31.276199 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:12:31.276210 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:12:31.276221 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:12:31.276231 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:12:31.276242 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:12:31.276253 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:12:31.276263 | orchestrator | 2025-08-29 19:12:31.276275 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-08-29 19:12:31.276290 | orchestrator | Friday 29 August 2025 19:12:27 +0000 (0:00:01.160) 0:00:44.729 ********* 2025-08-29 19:12:31.276302 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276314 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276325 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276336 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276347 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:12:31.276359 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276369 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276380 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276391 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276402 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:12:31.276413 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276426 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276437 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276449 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276461 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:12:31.276473 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276485 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276497 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276509 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276521 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:12:31.276533 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276545 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276557 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276569 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276581 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:12:31.276592 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276605 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276616 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276629 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276640 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:12:31.276652 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 19:12:31.276664 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 19:12:31.276685 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 19:12:31.276697 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 19:12:31.276709 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 19:12:31.276721 | orchestrator | 2025-08-29 19:12:31.276733 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 19:12:31.276765 | orchestrator | Friday 29 August 2025 19:12:29 +0000 (0:00:02.073) 0:00:46.803 ********* 2025-08-29 19:12:31.276777 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:12:31.276787 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:12:31.276798 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:12:31.276809 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:12:31.276820 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:12:31.276831 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:12:31.276841 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:12:31.276852 | orchestrator | 2025-08-29 19:12:31.276863 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 19:12:31.276874 | orchestrator | Friday 29 August 2025 19:12:30 +0000 (0:00:00.655) 0:00:47.458 ********* 2025-08-29 19:12:31.276885 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:12:31.276896 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:12:31.276906 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:12:31.276917 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:12:31.276928 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:12:31.276938 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:12:31.276949 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:12:31.276959 | orchestrator | 2025-08-29 19:12:31.276970 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:12:31.276982 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:12:31.276995 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:12:31.277011 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:12:31.277022 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:12:31.277033 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:12:31.277044 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:12:31.277055 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:12:31.277085 | orchestrator | 2025-08-29 19:12:31.277096 | orchestrator | 2025-08-29 19:12:31.277107 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:12:31.277118 | orchestrator | Friday 29 August 2025 19:12:30 +0000 (0:00:00.709) 0:00:48.167 ********* 2025-08-29 19:12:31.277129 | orchestrator | =============================================================================== 2025-08-29 19:12:31.277140 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.49s 2025-08-29 19:12:31.277151 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.35s 2025-08-29 19:12:31.277162 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.81s 2025-08-29 19:12:31.277173 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.41s 2025-08-29 19:12:31.277184 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 2.69s 2025-08-29 19:12:31.277202 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.19s 2025-08-29 19:12:31.277213 | orchestrator | osism.commons.network : Install 
package networkd-dispatcher ------------- 2.12s 2025-08-29 19:12:31.277224 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.10s 2025-08-29 19:12:31.277234 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.07s 2025-08-29 19:12:31.277245 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s 2025-08-29 19:12:31.277256 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-08-29 19:12:31.277267 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.34s 2025-08-29 19:12:31.277277 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.28s 2025-08-29 19:12:31.277288 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s 2025-08-29 19:12:31.277299 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-08-29 19:12:31.277310 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2025-08-29 19:12:31.277321 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-08-29 19:12:31.277331 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-08-29 19:12:31.277342 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s 2025-08-29 19:12:31.277353 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s 2025-08-29 19:12:31.572449 | orchestrator | + osism apply wireguard 2025-08-29 19:12:43.626736 | orchestrator | 2025-08-29 19:12:43 | INFO  | Task 0ba02569-a4f4-43f2-888b-972971f351c9 (wireguard) was prepared for execution. 
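The "Create systemd networkd netdev files" / "Create systemd networkd network files" tasks above render one `.netdev`/`.network` pair per VXLAN from each host's item (`vni`, `mtu`, `local_ip`, `dests`, `addresses`). As an illustrative sketch only (the file paths `/etc/systemd/network/30-vxlan1.netdev`/`.network` appear in the cleanup task above, but the exact keys and layout of the role's template are assumptions), the pair for `vxlan1` on testbed-node-0 would look roughly like:

```ini
# /etc/systemd/network/30-vxlan1.netdev -- illustrative sketch, not the role's actual template
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10

# /etc/systemd/network/30-vxlan1.network -- illustrative sketch
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20

# One all-zeros FDB entry per unicast peer (the "dests" list in the task item);
# remaining peers omitted here for brevity
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11

[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.5
```

With no multicast group configured, unicast head-end replication via static `[BridgeFDB]` entries is the usual systemd-networkd way to reach a fixed peer list like the `dests` shown in the log.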
2025-08-29 19:12:43.626854 | orchestrator | 2025-08-29 19:12:43 | INFO  | It takes a moment until task 0ba02569-a4f4-43f2-888b-972971f351c9 (wireguard) has been started and output is visible here. 2025-08-29 19:13:03.963220 | orchestrator | 2025-08-29 19:13:03.964069 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 19:13:03.964106 | orchestrator | 2025-08-29 19:13:03.964122 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 19:13:03.964136 | orchestrator | Friday 29 August 2025 19:12:47 +0000 (0:00:00.232) 0:00:00.232 ********* 2025-08-29 19:13:03.964147 | orchestrator | ok: [testbed-manager] 2025-08-29 19:13:03.964159 | orchestrator | 2025-08-29 19:13:03.964170 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 19:13:03.964182 | orchestrator | Friday 29 August 2025 19:12:49 +0000 (0:00:01.659) 0:00:01.891 ********* 2025-08-29 19:13:03.964193 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:03.964205 | orchestrator | 2025-08-29 19:13:03.964216 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 19:13:03.964227 | orchestrator | Friday 29 August 2025 19:12:56 +0000 (0:00:06.827) 0:00:08.718 ********* 2025-08-29 19:13:03.964238 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:03.964249 | orchestrator | 2025-08-29 19:13:03.964260 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 19:13:03.964271 | orchestrator | Friday 29 August 2025 19:12:56 +0000 (0:00:00.570) 0:00:09.289 ********* 2025-08-29 19:13:03.964282 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:03.964293 | orchestrator | 2025-08-29 19:13:03.964304 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 19:13:03.964315 | orchestrator 
| Friday 29 August 2025 19:12:57 +0000 (0:00:00.426) 0:00:09.716 ********* 2025-08-29 19:13:03.964325 | orchestrator | ok: [testbed-manager] 2025-08-29 19:13:03.964336 | orchestrator | 2025-08-29 19:13:03.964366 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 19:13:03.964379 | orchestrator | Friday 29 August 2025 19:12:57 +0000 (0:00:00.562) 0:00:10.278 ********* 2025-08-29 19:13:03.964413 | orchestrator | ok: [testbed-manager] 2025-08-29 19:13:03.964424 | orchestrator | 2025-08-29 19:13:03.964435 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 19:13:03.964446 | orchestrator | Friday 29 August 2025 19:12:58 +0000 (0:00:00.522) 0:00:10.801 ********* 2025-08-29 19:13:03.964456 | orchestrator | ok: [testbed-manager] 2025-08-29 19:13:03.964467 | orchestrator | 2025-08-29 19:13:03.964478 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 19:13:03.964489 | orchestrator | Friday 29 August 2025 19:12:58 +0000 (0:00:00.424) 0:00:11.226 ********* 2025-08-29 19:13:03.964499 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:03.964510 | orchestrator | 2025-08-29 19:13:03.964521 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 19:13:03.964532 | orchestrator | Friday 29 August 2025 19:12:59 +0000 (0:00:01.231) 0:00:12.458 ********* 2025-08-29 19:13:03.964543 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 19:13:03.964554 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:03.964565 | orchestrator | 2025-08-29 19:13:03.964576 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 19:13:03.964587 | orchestrator | Friday 29 August 2025 19:13:00 +0000 (0:00:00.941) 0:00:13.399 ********* 2025-08-29 19:13:03.964598 | orchestrator | changed: 
[testbed-manager] 2025-08-29 19:13:03.964608 | orchestrator | 2025-08-29 19:13:03.964619 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 19:13:03.964630 | orchestrator | Friday 29 August 2025 19:13:02 +0000 (0:00:01.728) 0:00:15.128 ********* 2025-08-29 19:13:03.964641 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:03.964652 | orchestrator | 2025-08-29 19:13:03.964663 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:13:03.964674 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:13:03.964686 | orchestrator | 2025-08-29 19:13:03.964697 | orchestrator | 2025-08-29 19:13:03.964708 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:13:03.964718 | orchestrator | Friday 29 August 2025 19:13:03 +0000 (0:00:00.990) 0:00:16.118 ********* 2025-08-29 19:13:03.964729 | orchestrator | =============================================================================== 2025-08-29 19:13:03.964740 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.83s 2025-08-29 19:13:03.964751 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2025-08-29 19:13:03.964762 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s 2025-08-29 19:13:03.964772 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s 2025-08-29 19:13:03.964783 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s 2025-08-29 19:13:03.964794 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2025-08-29 19:13:03.964805 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 
2025-08-29 19:13:03.964815 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2025-08-29 19:13:03.964826 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-08-29 19:13:03.964837 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2025-08-29 19:13:03.964848 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-08-29 19:13:04.252058 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-08-29 19:13:04.295339 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-08-29 19:13:04.295401 | orchestrator | Dload Upload Total Spent Left Speed 2025-08-29 19:13:04.370082 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 190 0 --:--:-- --:--:-- --:--:-- 191 2025-08-29 19:13:04.388823 | orchestrator | + osism apply --environment custom workarounds 2025-08-29 19:13:06.324365 | orchestrator | 2025-08-29 19:13:06 | INFO  | Trying to run play workarounds in environment custom 2025-08-29 19:13:16.417534 | orchestrator | 2025-08-29 19:13:16 | INFO  | Task 87e2dea2-9441-48a5-a5d2-50d1d7abb08c (workarounds) was prepared for execution. 2025-08-29 19:13:16.417651 | orchestrator | 2025-08-29 19:13:16 | INFO  | It takes a moment until task 87e2dea2-9441-48a5-a5d2-50d1d7abb08c (workarounds) has been started and output is visible here. 
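The wireguard play above generates server and preshared keys, then renders `wg0.conf` (the "Copy wg0.conf configuration file" task) and client configs before starting `wg-quick@wg0.service`. A minimal sketch of the shape of such a file — every value below is a placeholder, not the testbed's actual configuration:

```ini
# wg0.conf -- placeholder values for illustration only
[Interface]
Address = 192.0.2.1/24        # assumed tunnel address
ListenPort = 51820            # WireGuard's conventional default port
PrivateKey = <server private key read by "Get private key - server">

[Peer]
PublicKey = <client public key>
PresharedKey = <key created by "Create preshared key">
AllowedIPs = 192.0.2.2/32
```

`wg-quick@wg0` (managed in the "Manage wg-quick@wg0.service service" task, then restarted by the handler) brings the interface up from this file.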
2025-08-29 19:13:41.324705 | orchestrator | 2025-08-29 19:13:41.324825 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:13:41.324845 | orchestrator | 2025-08-29 19:13:41.324857 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-08-29 19:13:41.324869 | orchestrator | Friday 29 August 2025 19:13:20 +0000 (0:00:00.152) 0:00:00.152 ********* 2025-08-29 19:13:41.324881 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324892 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324902 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324913 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324923 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324934 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324958 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-08-29 19:13:41.324970 | orchestrator | 2025-08-29 19:13:41.324980 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-08-29 19:13:41.324991 | orchestrator | 2025-08-29 19:13:41.325002 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 19:13:41.325012 | orchestrator | Friday 29 August 2025 19:13:21 +0000 (0:00:00.789) 0:00:00.942 ********* 2025-08-29 19:13:41.325068 | orchestrator | ok: [testbed-manager] 2025-08-29 19:13:41.325082 | orchestrator | 2025-08-29 19:13:41.325093 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-08-29 19:13:41.325104 | orchestrator | 2025-08-29 19:13:41.325115 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-08-29 19:13:41.325126 | orchestrator | Friday 29 August 2025 19:13:23 +0000 (0:00:02.571) 0:00:03.513 ********* 2025-08-29 19:13:41.325137 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:13:41.325148 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:13:41.325159 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:13:41.325169 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:13:41.325180 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:13:41.325190 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:13:41.325201 | orchestrator | 2025-08-29 19:13:41.325212 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-08-29 19:13:41.325223 | orchestrator | 2025-08-29 19:13:41.325234 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-08-29 19:13:41.325245 | orchestrator | Friday 29 August 2025 19:13:25 +0000 (0:00:01.761) 0:00:05.275 ********* 2025-08-29 19:13:41.325258 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 19:13:41.325273 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 19:13:41.325285 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 19:13:41.325297 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 19:13:41.325310 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 19:13:41.325322 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 19:13:41.325359 | orchestrator | 2025-08-29 19:13:41.325372 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-08-29 19:13:41.325384 | orchestrator | Friday 29 August 2025 19:13:26 +0000 (0:00:01.486) 0:00:06.761 ********* 2025-08-29 19:13:41.325396 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:13:41.325408 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:13:41.325420 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:13:41.325432 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:13:41.325444 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:13:41.325456 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:13:41.325468 | orchestrator | 2025-08-29 19:13:41.325480 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-08-29 19:13:41.325492 | orchestrator | Friday 29 August 2025 19:13:30 +0000 (0:00:03.820) 0:00:10.582 ********* 2025-08-29 19:13:41.325504 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:13:41.325516 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:13:41.325528 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:13:41.325540 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:13:41.325552 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:13:41.325563 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:13:41.325575 | orchestrator | 2025-08-29 19:13:41.325587 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-08-29 19:13:41.325600 | orchestrator | 2025-08-29 19:13:41.325611 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-08-29 19:13:41.325621 | orchestrator | Friday 29 August 2025 19:13:31 +0000 (0:00:00.770) 0:00:11.352 ********* 2025-08-29 19:13:41.325632 | orchestrator | changed: [testbed-manager] 2025-08-29 19:13:41.325643 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:13:41.325654 | orchestrator | changed: [testbed-node-4] 2025-08-29 
19:13:41.325664 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:13:41.325675 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:13:41.325686 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:13:41.325696 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:13:41.325707 | orchestrator |
2025-08-29 19:13:41.325717 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-08-29 19:13:41.325728 | orchestrator | Friday 29 August 2025 19:13:33 +0000 (0:00:01.704) 0:00:13.057 *********
2025-08-29 19:13:41.325739 | orchestrator | changed: [testbed-manager]
2025-08-29 19:13:41.325750 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:13:41.325760 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:13:41.325771 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:13:41.325782 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:13:41.325792 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:13:41.325820 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:13:41.325832 | orchestrator |
2025-08-29 19:13:41.325843 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-08-29 19:13:41.325854 | orchestrator | Friday 29 August 2025 19:13:34 +0000 (0:00:01.626) 0:00:14.683 *********
2025-08-29 19:13:41.325865 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:13:41.325876 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:13:41.325886 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:13:41.325897 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:13:41.325908 | orchestrator | ok: [testbed-manager]
2025-08-29 19:13:41.325918 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:13:41.325929 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:13:41.325940 | orchestrator |
2025-08-29 19:13:41.325951 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-08-29 19:13:41.325961 | orchestrator | Friday 29 August 2025 19:13:36 +0000 (0:00:01.439) 0:00:16.123 *********
2025-08-29 19:13:41.325972 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:13:41.325983 | orchestrator | changed: [testbed-manager]
2025-08-29 19:13:41.325994 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:13:41.326067 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:13:41.326083 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:13:41.326094 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:13:41.326105 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:13:41.326116 | orchestrator |
2025-08-29 19:13:41.326127 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-08-29 19:13:41.326137 | orchestrator | Friday 29 August 2025 19:13:38 +0000 (0:00:01.682) 0:00:17.805 *********
2025-08-29 19:13:41.326148 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:13:41.326159 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:13:41.326169 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:13:41.326180 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:13:41.326191 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:13:41.326201 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:13:41.326212 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:13:41.326222 | orchestrator |
2025-08-29 19:13:41.326233 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-08-29 19:13:41.326244 | orchestrator |
2025-08-29 19:13:41.326255 | orchestrator | TASK [Install python3-docker] **************************************************
2025-08-29 19:13:41.326265 | orchestrator | Friday 29 August 2025 19:13:38 +0000 (0:00:00.635) 0:00:18.440 *********
2025-08-29 19:13:41.326276 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:13:41.326287 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:13:41.326297 | orchestrator | ok: [testbed-manager]
2025-08-29 19:13:41.326308 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:13:41.326319 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:13:41.326329 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:13:41.326340 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:13:41.326351 | orchestrator |
2025-08-29 19:13:41.326362 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:13:41.326374 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 19:13:41.326386 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:13:41.326397 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:13:41.326408 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:13:41.326419 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:13:41.326429 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:13:41.326440 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:13:41.326450 | orchestrator |
2025-08-29 19:13:41.326461 | orchestrator |
2025-08-29 19:13:41.326472 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:13:41.326483 | orchestrator | Friday 29 August 2025 19:13:41 +0000 (0:00:02.641) 0:00:21.081 *********
2025-08-29 19:13:41.326494 | orchestrator | ===============================================================================
2025-08-29 19:13:41.326504 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s
2025-08-29 19:13:41.326515 | orchestrator | Install python3-docker -------------------------------------------------- 2.64s
2025-08-29 19:13:41.326526 | orchestrator | Apply netplan configuration --------------------------------------------- 2.57s
2025-08-29 19:13:41.326537 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s
2025-08-29 19:13:41.326555 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2025-08-29 19:13:41.326566 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.68s
2025-08-29 19:13:41.326576 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s
2025-08-29 19:13:41.326587 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2025-08-29 19:13:41.326598 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.44s
2025-08-29 19:13:41.326608 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s
2025-08-29 19:13:41.326619 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2025-08-29 19:13:41.326638 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2025-08-29 19:13:41.992283 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-08-29 19:13:54.048965 | orchestrator | 2025-08-29 19:13:54 | INFO  | Task d604257e-e093-49eb-ad23-b3bc0e147cfa (reboot) was prepared for execution.
2025-08-29 19:13:54.049141 | orchestrator | 2025-08-29 19:13:54 | INFO  | It takes a moment until task d604257e-e093-49eb-ad23-b3bc0e147cfa (reboot) has been started and output is visible here.
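The reboot play that runs next is gated on the `ireallymeanit=yes` extra variable and, as the per-node "Reboot systems" plays show, processes one host at a time and does not wait for the host to come back. A minimal sketch of that guard-plus-async-reboot pattern (assumed structure and task wording modelled on the log; not the verbatim OSISM playbook):

```yaml
# Sketch of a confirmation-gated reboot play (assumption: the real OSISM
# playbook may differ in detail). The fail task aborts unless the caller
# passed -e ireallymeanit=yes; the reboot is fired asynchronously so the
# play does not block on the host going down.
- name: Reboot systems
  hosts: all
  serial: 1
  gather_facts: false
  become: true
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to really reboot the systems."
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now "Reboot triggered via osism apply reboot"
      async: 1
      poll: 0
```

Firing the shutdown with `async: 1` / `poll: 0` is what makes the "do not wait" task report `changed` immediately; reachability is verified afterwards by the separate `wait-for-connection` run.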
2025-08-29 19:14:03.838446 | orchestrator |
2025-08-29 19:14:03.838563 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 19:14:03.838581 | orchestrator |
2025-08-29 19:14:03.838594 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 19:14:03.838606 | orchestrator | Friday 29 August 2025 19:13:58 +0000 (0:00:00.166) 0:00:00.166 *********
2025-08-29 19:14:03.838633 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:14:03.838645 | orchestrator |
2025-08-29 19:14:03.838656 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 19:14:03.838667 | orchestrator | Friday 29 August 2025 19:13:58 +0000 (0:00:00.091) 0:00:00.258 *********
2025-08-29 19:14:03.838678 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:14:03.838689 | orchestrator |
2025-08-29 19:14:03.838699 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 19:14:03.838710 | orchestrator | Friday 29 August 2025 19:13:59 +0000 (0:00:00.896) 0:00:01.155 *********
2025-08-29 19:14:03.838721 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:14:03.838732 | orchestrator |
2025-08-29 19:14:03.838742 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 19:14:03.838753 | orchestrator |
2025-08-29 19:14:03.838764 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 19:14:03.838775 | orchestrator | Friday 29 August 2025 19:13:59 +0000 (0:00:00.094) 0:00:01.249 *********
2025-08-29 19:14:03.838786 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:14:03.838797 | orchestrator |
2025-08-29 19:14:03.838807 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 19:14:03.838818 | orchestrator | Friday 29 August 2025 19:13:59 +0000 (0:00:00.083) 0:00:01.333 *********
2025-08-29 19:14:03.838829 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:14:03.838839 | orchestrator |
2025-08-29 19:14:03.838850 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 19:14:03.838861 | orchestrator | Friday 29 August 2025 19:13:59 +0000 (0:00:00.653) 0:00:01.986 *********
2025-08-29 19:14:03.838871 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:14:03.838882 | orchestrator |
2025-08-29 19:14:03.838892 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 19:14:03.838903 | orchestrator |
2025-08-29 19:14:03.838914 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 19:14:03.838924 | orchestrator | Friday 29 August 2025 19:13:59 +0000 (0:00:00.100) 0:00:02.086 *********
2025-08-29 19:14:03.838935 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:14:03.838946 | orchestrator |
2025-08-29 19:14:03.838957 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 19:14:03.838990 | orchestrator | Friday 29 August 2025 19:14:00 +0000 (0:00:00.161) 0:00:02.248 *********
2025-08-29 19:14:03.839004 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:14:03.839043 | orchestrator |
2025-08-29 19:14:03.839054 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 19:14:03.839065 | orchestrator | Friday 29 August 2025 19:14:00 +0000 (0:00:00.661) 0:00:02.910 *********
2025-08-29 19:14:03.839076 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:14:03.839086 | orchestrator |
2025-08-29 19:14:03.839097 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 19:14:03.839107 | orchestrator |
2025-08-29 19:14:03.839118 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 19:14:03.839129 | orchestrator | Friday 29 August 2025 19:14:00 +0000 (0:00:00.100) 0:00:03.011 *********
2025-08-29 19:14:03.839139 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:14:03.839150 | orchestrator |
2025-08-29 19:14:03.839160 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 19:14:03.839171 | orchestrator | Friday 29 August 2025 19:14:01 +0000 (0:00:00.104) 0:00:03.116 *********
2025-08-29 19:14:03.839182 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:14:03.839192 | orchestrator |
2025-08-29 19:14:03.839203 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 19:14:03.839214 | orchestrator | Friday 29 August 2025 19:14:01 +0000 (0:00:00.644) 0:00:03.761 *********
2025-08-29 19:14:03.839225 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:14:03.839235 | orchestrator |
2025-08-29 19:14:03.839246 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 19:14:03.839256 | orchestrator |
2025-08-29 19:14:03.839267 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 19:14:03.839278 | orchestrator | Friday 29 August 2025 19:14:01 +0000 (0:00:00.115) 0:00:03.876 *********
2025-08-29 19:14:03.839288 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:14:03.839299 | orchestrator |
2025-08-29 19:14:03.839309 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 19:14:03.839320 | orchestrator | Friday 29 August 2025 19:14:01 +0000 (0:00:00.100) 0:00:03.976 *********
2025-08-29 19:14:03.839331 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:14:03.839341 | orchestrator |
2025-08-29 19:14:03.839352 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 19:14:03.839363 | orchestrator | Friday 29 August 2025 19:14:02 +0000 (0:00:00.674) 0:00:04.651 *********
2025-08-29 19:14:03.839373 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:14:03.839384 | orchestrator |
2025-08-29 19:14:03.839395 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 19:14:03.839405 | orchestrator |
2025-08-29 19:14:03.839416 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 19:14:03.839426 | orchestrator | Friday 29 August 2025 19:14:02 +0000 (0:00:00.118) 0:00:04.770 *********
2025-08-29 19:14:03.839437 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:14:03.839447 | orchestrator |
2025-08-29 19:14:03.839458 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 19:14:03.839468 | orchestrator | Friday 29 August 2025 19:14:02 +0000 (0:00:00.118) 0:00:04.889 *********
2025-08-29 19:14:03.839479 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:14:03.839490 | orchestrator |
2025-08-29 19:14:03.839500 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 19:14:03.839510 | orchestrator | Friday 29 August 2025 19:14:03 +0000 (0:00:00.683) 0:00:05.573 *********
2025-08-29 19:14:03.839538 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:14:03.839549 | orchestrator |
2025-08-29 19:14:03.839560 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:14:03.839571 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:14:03.839592 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:14:03.839603 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:14:03.839614 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:14:03.839625 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:14:03.839635 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:14:03.839646 | orchestrator |
2025-08-29 19:14:03.839656 | orchestrator |
2025-08-29 19:14:03.839667 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:14:03.839678 | orchestrator | Friday 29 August 2025 19:14:03 +0000 (0:00:00.040) 0:00:05.614 *********
2025-08-29 19:14:03.839689 | orchestrator | ===============================================================================
2025-08-29 19:14:03.839699 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.22s
2025-08-29 19:14:03.839710 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s
2025-08-29 19:14:03.839721 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s
2025-08-29 19:14:04.152815 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-08-29 19:14:16.148229 | orchestrator | 2025-08-29 19:14:16 | INFO  | Task 0f3ab4b4-b51f-40f7-906b-70e187fd887f (wait-for-connection) was prepared for execution.
2025-08-29 19:14:16.148360 | orchestrator | 2025-08-29 19:14:16 | INFO  | It takes a moment until task 0f3ab4b4-b51f-40f7-906b-70e187fd887f (wait-for-connection) has been started and output is visible here.
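Because the reboot play deliberately does not wait, `osism apply wait-for-connection` is run next to block until every node accepts SSH connections again. The play whose output follows can be sketched roughly like this (assumed structure and timeout values; not the verbatim OSISM playbook):

```yaml
# Sketch (assumption) of the wait-for-connection play: block until each
# rebooted node is reachable again before the deployment continues.
- name: Wait until remote systems are reachable
  hosts: all
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 5
        timeout: 600
```

`ansible.builtin.wait_for_connection` retries the connection plugin itself, so the task reports `ok` once the node's SSH daemon is back; in this run that took about 11.6 seconds across all six nodes.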
2025-08-29 19:14:32.278452 | orchestrator | 2025-08-29 19:14:32.278569 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-08-29 19:14:32.278587 | orchestrator | 2025-08-29 19:14:32.278620 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-08-29 19:14:32.278632 | orchestrator | Friday 29 August 2025 19:14:20 +0000 (0:00:00.245) 0:00:00.245 ********* 2025-08-29 19:14:32.278643 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:14:32.278655 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:14:32.278666 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:14:32.278676 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:14:32.278687 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:14:32.278698 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:14:32.278708 | orchestrator | 2025-08-29 19:14:32.278719 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:14:32.278731 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:14:32.278744 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:14:32.278755 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:14:32.278765 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:14:32.278776 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:14:32.278787 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:14:32.278821 | orchestrator | 2025-08-29 19:14:32.278832 | orchestrator | 2025-08-29 19:14:32.278843 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 19:14:32.278854 | orchestrator | Friday 29 August 2025 19:14:31 +0000 (0:00:11.555) 0:00:11.800 ********* 2025-08-29 19:14:32.278864 | orchestrator | =============================================================================== 2025-08-29 19:14:32.278875 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2025-08-29 19:14:32.596371 | orchestrator | + osism apply hddtemp 2025-08-29 19:14:44.652172 | orchestrator | 2025-08-29 19:14:44 | INFO  | Task e201af5b-d24f-44d0-9c83-678900a6773f (hddtemp) was prepared for execution. 2025-08-29 19:14:44.652287 | orchestrator | 2025-08-29 19:14:44 | INFO  | It takes a moment until task e201af5b-d24f-44d0-9c83-678900a6773f (hddtemp) has been started and output is visible here. 2025-08-29 19:15:12.060470 | orchestrator | 2025-08-29 19:15:12.060612 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-08-29 19:15:12.060631 | orchestrator | 2025-08-29 19:15:12.060643 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-08-29 19:15:12.060655 | orchestrator | Friday 29 August 2025 19:14:48 +0000 (0:00:00.263) 0:00:00.263 ********* 2025-08-29 19:15:12.060667 | orchestrator | ok: [testbed-manager] 2025-08-29 19:15:12.060679 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:15:12.060690 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:15:12.060701 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:15:12.060729 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:15:12.060741 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:15:12.060751 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:15:12.060762 | orchestrator | 2025-08-29 19:15:12.060773 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-08-29 19:15:12.060784 | orchestrator | Friday 29 August 2025 
19:14:49 +0000 (0:00:00.703) 0:00:00.966 ********* 2025-08-29 19:15:12.060798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:15:12.060812 | orchestrator | 2025-08-29 19:15:12.060824 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-08-29 19:15:12.060835 | orchestrator | Friday 29 August 2025 19:14:50 +0000 (0:00:01.183) 0:00:02.150 ********* 2025-08-29 19:15:12.060846 | orchestrator | ok: [testbed-manager] 2025-08-29 19:15:12.060856 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:15:12.060867 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:15:12.060878 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:15:12.060889 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:15:12.060899 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:15:12.060910 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:15:12.060921 | orchestrator | 2025-08-29 19:15:12.060932 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-08-29 19:15:12.060942 | orchestrator | Friday 29 August 2025 19:14:52 +0000 (0:00:01.911) 0:00:04.062 ********* 2025-08-29 19:15:12.060953 | orchestrator | changed: [testbed-manager] 2025-08-29 19:15:12.060965 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:15:12.060976 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:15:12.061033 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:15:12.061045 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:15:12.061058 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:15:12.061070 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:15:12.061082 | orchestrator | 2025-08-29 19:15:12.061095 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-08-29 19:15:12.061107 | orchestrator | Friday 29 August 2025 19:14:53 +0000 (0:00:01.131) 0:00:05.193 ********* 2025-08-29 19:15:12.061121 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:15:12.061133 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:15:12.061145 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:15:12.061181 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:15:12.061194 | orchestrator | ok: [testbed-manager] 2025-08-29 19:15:12.061207 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:15:12.061219 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:15:12.061232 | orchestrator | 2025-08-29 19:15:12.061244 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-08-29 19:15:12.061257 | orchestrator | Friday 29 August 2025 19:14:55 +0000 (0:00:02.073) 0:00:07.267 ********* 2025-08-29 19:15:12.061269 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:15:12.061281 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:15:12.061293 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:15:12.061306 | orchestrator | changed: [testbed-manager] 2025-08-29 19:15:12.061317 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:15:12.061328 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:15:12.061338 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:15:12.061349 | orchestrator | 2025-08-29 19:15:12.061360 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-08-29 19:15:12.061370 | orchestrator | Friday 29 August 2025 19:14:56 +0000 (0:00:00.855) 0:00:08.123 ********* 2025-08-29 19:15:12.061381 | orchestrator | changed: [testbed-manager] 2025-08-29 19:15:12.061392 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:15:12.061403 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:15:12.061413 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:15:12.061424 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 19:15:12.061434 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:15:12.061451 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:15:12.061471 | orchestrator | 2025-08-29 19:15:12.061491 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-08-29 19:15:12.061509 | orchestrator | Friday 29 August 2025 19:15:08 +0000 (0:00:11.742) 0:00:19.865 ********* 2025-08-29 19:15:12.061531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:15:12.061551 | orchestrator | 2025-08-29 19:15:12.061569 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-08-29 19:15:12.061581 | orchestrator | Friday 29 August 2025 19:15:09 +0000 (0:00:01.395) 0:00:21.261 ********* 2025-08-29 19:15:12.061592 | orchestrator | changed: [testbed-manager] 2025-08-29 19:15:12.061602 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:15:12.061613 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:15:12.061623 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:15:12.061634 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:15:12.061644 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:15:12.061655 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:15:12.061665 | orchestrator | 2025-08-29 19:15:12.061676 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:15:12.061687 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:15:12.061718 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:15:12.061731 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:15:12.061742 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:15:12.061759 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:15:12.061771 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:15:12.061791 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:15:12.061802 | orchestrator | 2025-08-29 19:15:12.061813 | orchestrator | 2025-08-29 19:15:12.061824 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:15:12.061835 | orchestrator | Friday 29 August 2025 19:15:11 +0000 (0:00:01.927) 0:00:23.188 ********* 2025-08-29 19:15:12.061846 | orchestrator | =============================================================================== 2025-08-29 19:15:12.061857 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.74s 2025-08-29 19:15:12.061867 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.07s 2025-08-29 19:15:12.061878 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2025-08-29 19:15:12.061889 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2025-08-29 19:15:12.061899 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s 2025-08-29 19:15:12.061910 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2025-08-29 19:15:12.061921 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s 2025-08-29 19:15:12.061931 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.86s 2025-08-29 19:15:12.061942 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2025-08-29 19:15:12.386470 | orchestrator | ++ semver latest 7.1.1 2025-08-29 19:15:12.439077 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 19:15:12.439146 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 19:15:12.439159 | orchestrator | + sudo systemctl restart manager.service 2025-08-29 19:15:26.068938 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 19:15:26.069066 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 19:15:26.069081 | orchestrator | + local max_attempts=60 2025-08-29 19:15:26.069091 | orchestrator | + local name=ceph-ansible 2025-08-29 19:15:26.069100 | orchestrator | + local attempt_num=1 2025-08-29 19:15:26.069110 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:26.106614 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:26.106677 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:26.106902 | orchestrator | + sleep 5 2025-08-29 19:15:31.111776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:31.258278 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:31.258369 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:31.258385 | orchestrator | + sleep 5 2025-08-29 19:15:36.261902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:36.299723 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:36.300117 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:36.300143 | orchestrator | + sleep 5 2025-08-29 19:15:41.305091 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:41.344336 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:41.344525 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:41.344546 | orchestrator | + sleep 5 2025-08-29 19:15:46.350373 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:46.388499 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:46.388903 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:46.388936 | orchestrator | + sleep 5 2025-08-29 19:15:51.394903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:51.436773 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:51.436873 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:51.436889 | orchestrator | + sleep 5 2025-08-29 19:15:56.442415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:15:56.480447 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:15:56.480527 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:15:56.480542 | orchestrator | + sleep 5 2025-08-29 19:16:01.485340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:01.528833 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:01.528931 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:16:01.528947 | orchestrator | + sleep 5 2025-08-29 19:16:06.530704 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:06.570457 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:06.570539 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:16:06.570556 | orchestrator | + sleep 5 2025-08-29 19:16:11.574199 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:11.611893 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-08-29 19:16:11.611978 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:16:11.611994 | orchestrator | + sleep 5 2025-08-29 19:16:16.617316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:16.657198 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:16.657264 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:16:16.657279 | orchestrator | + sleep 5 2025-08-29 19:16:21.662510 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:21.699883 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:21.699989 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:16:21.700004 | orchestrator | + sleep 5 2025-08-29 19:16:26.705324 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:26.743027 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:26.743103 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 19:16:26.743117 | orchestrator | + sleep 5 2025-08-29 19:16:31.747560 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 19:16:31.790265 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:31.790350 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 19:16:31.790368 | orchestrator | + local max_attempts=60 2025-08-29 19:16:31.790381 | orchestrator | + local name=kolla-ansible 2025-08-29 19:16:31.790392 | orchestrator | + local attempt_num=1 2025-08-29 19:16:31.790404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 19:16:31.829319 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:31.829385 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 19:16:31.829399 | orchestrator | + local max_attempts=60 2025-08-29 
19:16:31.829411 | orchestrator | + local name=osism-ansible 2025-08-29 19:16:31.829422 | orchestrator | + local attempt_num=1 2025-08-29 19:16:31.830175 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 19:16:31.873290 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 19:16:31.873347 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 19:16:31.873360 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 19:16:32.063763 | orchestrator | ARA in ceph-ansible already disabled. 2025-08-29 19:16:32.237747 | orchestrator | ARA in kolla-ansible already disabled. 2025-08-29 19:16:32.416748 | orchestrator | ARA in osism-ansible already disabled. 2025-08-29 19:16:32.573530 | orchestrator | ARA in osism-kubernetes already disabled. 2025-08-29 19:16:32.574562 | orchestrator | + osism apply gather-facts 2025-08-29 19:16:44.652437 | orchestrator | 2025-08-29 19:16:44 | INFO  | Task 76819f62-64a9-474a-b34c-212c0db0fd1c (gather-facts) was prepared for execution. 2025-08-29 19:16:44.652557 | orchestrator | 2025-08-29 19:16:44 | INFO  | It takes a moment until task 76819f62-64a9-474a-b34c-212c0db0fd1c (gather-facts) has been started and output is visible here. 
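The trace above shows a `wait_for_container_healthy` helper polling `docker inspect` until each manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) reports `healthy`. A minimal sketch reconstructed from the `set -x` output (the original calls `/usr/bin/docker` by absolute path; the error message is an assumption, not taken from the log):

```shell
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status every 5 seconds until it is "healthy",
    # giving up after max_attempts checks.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep this allows roughly five minutes per container, which matches the `unhealthy` → `starting` → `healthy` progression visible in the log.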
2025-08-29 19:16:57.938258 | orchestrator | 2025-08-29 19:16:57.938372 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 19:16:57.938388 | orchestrator | 2025-08-29 19:16:57.938400 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 19:16:57.938412 | orchestrator | Friday 29 August 2025 19:16:48 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-08-29 19:16:57.938423 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:16:57.938434 | orchestrator | ok: [testbed-manager] 2025-08-29 19:16:57.938445 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:16:57.938455 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:16:57.938466 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:16:57.938503 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:16:57.938514 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:16:57.938525 | orchestrator | 2025-08-29 19:16:57.938536 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 19:16:57.938546 | orchestrator | 2025-08-29 19:16:57.938557 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 19:16:57.938568 | orchestrator | Friday 29 August 2025 19:16:56 +0000 (0:00:08.483) 0:00:08.686 ********* 2025-08-29 19:16:57.938579 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:16:57.938590 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:16:57.938601 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:16:57.938612 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:16:57.938622 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:16:57.938633 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:16:57.938643 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:16:57.938654 | orchestrator | 2025-08-29 19:16:57.938664 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 19:16:57.938675 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938687 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938698 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938709 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938719 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938730 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938740 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:16:57.938751 | orchestrator | 2025-08-29 19:16:57.938762 | orchestrator | 2025-08-29 19:16:57.938773 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:16:57.938783 | orchestrator | Friday 29 August 2025 19:16:57 +0000 (0:00:00.566) 0:00:09.252 ********* 2025-08-29 19:16:57.938795 | orchestrator | =============================================================================== 2025-08-29 19:16:57.938807 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.48s 2025-08-29 19:16:57.938819 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-08-29 19:16:58.312503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-08-29 19:16:58.333404 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-08-29 19:16:58.351096 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-08-29 19:16:58.370510 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-08-29 19:16:58.386644 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-08-29 19:16:58.409274 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-08-29 19:16:58.422547 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-08-29 19:16:58.434943 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-08-29 19:16:58.446110 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-08-29 19:16:58.469411 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-08-29 19:16:58.487462 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-08-29 19:16:58.507189 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-08-29 19:16:58.525652 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-08-29 19:16:58.544095 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-08-29 19:16:58.562798 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-08-29 19:16:58.575877 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-08-29 19:16:58.588109 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-08-29 19:16:58.607343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-08-29 19:16:58.622595 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-08-29 19:16:58.635044 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-08-29 19:16:58.647136 | orchestrator | + [[ false == \t\r\u\e ]] 2025-08-29 19:16:58.750824 | orchestrator | ok: Runtime: 0:23:27.258082 2025-08-29 19:16:58.844878 | 2025-08-29 19:16:58.845114 | TASK [Deploy services] 2025-08-29 19:16:59.376391 | orchestrator | skipping: Conditional result was False 2025-08-29 19:16:59.395511 | 2025-08-29 19:16:59.395701 | TASK [Deploy in a nutshell] 2025-08-29 19:17:00.099563 | orchestrator | + set -e 2025-08-29 19:17:00.099686 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 19:17:00.099697 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 19:17:00.099706 | orchestrator | ++ INTERACTIVE=false 2025-08-29 19:17:00.099711 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 19:17:00.099716 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 19:17:00.099722 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 19:17:00.099745 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 19:17:00.099757 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 19:17:00.099762 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 19:17:00.099776 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 19:17:00.099781 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 19:17:00.099788 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2025-08-29 19:17:00.099792 | orchestrator | ++ export MANAGER_VERSION=latest 2025-08-29 19:17:00.099803 | orchestrator | ++ MANAGER_VERSION=latest 2025-08-29 19:17:00.099807 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 19:17:00.099812 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 19:17:00.099816 | orchestrator | ++ export ARA=false 2025-08-29 19:17:00.099820 | orchestrator | ++ ARA=false 2025-08-29 19:17:00.099824 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 19:17:00.099828 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 19:17:00.099832 | orchestrator | ++ export TEMPEST=false 2025-08-29 19:17:00.099836 | orchestrator | ++ TEMPEST=false 2025-08-29 19:17:00.099851 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 19:17:00.099855 | orchestrator | ++ IS_ZUUL=true 2025-08-29 19:17:00.099859 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2025-08-29 19:17:00.099863 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.11 2025-08-29 19:17:00.099867 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 19:17:00.099870 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 19:17:00.099876 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 19:17:00.099880 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 19:17:00.099884 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 19:17:00.099888 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 19:17:00.099892 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 19:17:00.099895 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 19:17:00.099899 | orchestrator | + echo 2025-08-29 19:17:00.099998 | orchestrator | 2025-08-29 19:17:00.100004 | orchestrator | # PULL IMAGES 2025-08-29 19:17:00.100008 | orchestrator | 2025-08-29 19:17:00.100012 | orchestrator | + echo '# PULL IMAGES' 2025-08-29 19:17:00.100016 | orchestrator | + echo 2025-08-29 19:17:00.101318 | orchestrator | ++ semver latest 7.0.0 2025-08-29 
19:17:00.156198 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 19:17:00.156232 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 19:17:00.156249 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-08-29 19:17:02.086656 | orchestrator | 2025-08-29 19:17:02 | INFO  | Trying to run play pull-images in environment custom 2025-08-29 19:17:12.190549 | orchestrator | 2025-08-29 19:17:12 | INFO  | Task c507f237-8fde-4969-b33d-d3237f29e0f0 (pull-images) was prepared for execution. 2025-08-29 19:17:12.190682 | orchestrator | 2025-08-29 19:17:12 | INFO  | Task c507f237-8fde-4969-b33d-d3237f29e0f0 is running in background. No more output. Check ARA for logs. 2025-08-29 19:17:14.511991 | orchestrator | 2025-08-29 19:17:14 | INFO  | Trying to run play wipe-partitions in environment custom 2025-08-29 19:17:24.647586 | orchestrator | 2025-08-29 19:17:24 | INFO  | Task cd81cd14-7fb2-44ab-b20c-1cc2b7ca0070 (wipe-partitions) was prepared for execution. 2025-08-29 19:17:24.647695 | orchestrator | 2025-08-29 19:17:24 | INFO  | It takes a moment until task cd81cd14-7fb2-44ab-b20c-1cc2b7ca0070 (wipe-partitions) has been started and output is visible here. 
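The pull-images step is gated on the manager version: the trace runs `semver latest 7.0.0` (which yields `-1`), falls through `[[ -1 -ge 0 ]]`, and still dispatches the play because the version is literally `latest`. A hedged sketch of that gate, assuming the logic inferred from the trace; `semver_ge` and `should_pull_images` are hypothetical names, and the comparator is a `sort -V` stand-in for whatever the real `semver` helper does:

```shell
# Stand-in comparator: prints 0 if $1 >= $2, -1 otherwise (mimics the
# trace's "semver A B" output; the real helper's CLI may differ).
semver_ge() {
    [[ "$1" == "$2" ]] && { echo 0; return; }
    [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]] && echo 0 || echo -1
}

# Pull images only on MANAGER_VERSION "latest" or >= 7.0.0, as the gate
# in the trace appears to do.
should_pull_images() {
    local version=$1
    [[ "$version" == "latest" ]] && return 0
    [[ "$(semver_ge "$version" 7.0.0)" -ge 0 ]]
}
```

Usage in the deploy script would then look like `should_pull_images "$MANAGER_VERSION" && osism apply --no-wait -r 2 -e custom pull-images`; the `--no-wait` flag lets the image pull run in the background while the `wipe-partitions` play proceeds in the foreground.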
2025-08-29 19:17:37.737426 | orchestrator | 2025-08-29 19:17:37.737542 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-08-29 19:17:37.737560 | orchestrator | 2025-08-29 19:17:37.737573 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-08-29 19:17:37.737589 | orchestrator | Friday 29 August 2025 19:17:29 +0000 (0:00:00.143) 0:00:00.143 ********* 2025-08-29 19:17:37.737603 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:17:37.737615 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:17:37.737626 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:17:37.737637 | orchestrator | 2025-08-29 19:17:37.737649 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-08-29 19:17:37.737690 | orchestrator | Friday 29 August 2025 19:17:29 +0000 (0:00:00.588) 0:00:00.731 ********* 2025-08-29 19:17:37.737702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:17:37.737713 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:17:37.737728 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:17:37.737740 | orchestrator | 2025-08-29 19:17:37.737751 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-08-29 19:17:37.737762 | orchestrator | Friday 29 August 2025 19:17:30 +0000 (0:00:00.317) 0:00:01.049 ********* 2025-08-29 19:17:37.737829 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:17:37.737842 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:17:37.737860 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:17:37.737878 | orchestrator | 2025-08-29 19:17:37.737898 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-08-29 19:17:37.737926 | orchestrator | Friday 29 August 2025 19:17:31 +0000 (0:00:00.767) 0:00:01.816 ********* 2025-08-29 19:17:37.737948 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 19:17:37.737967 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:17:37.737987 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:17:37.738004 | orchestrator | 2025-08-29 19:17:37.738091 | orchestrator | TASK [Check device availability] *********************************************** 2025-08-29 19:17:37.738105 | orchestrator | Friday 29 August 2025 19:17:31 +0000 (0:00:00.277) 0:00:02.094 ********* 2025-08-29 19:17:37.738118 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 19:17:37.738136 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 19:17:37.738149 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 19:17:37.738161 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 19:17:37.738174 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 19:17:37.738187 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 19:17:37.738198 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 19:17:37.738211 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 19:17:37.738224 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 19:17:37.738236 | orchestrator | 2025-08-29 19:17:37.738248 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-08-29 19:17:37.738262 | orchestrator | Friday 29 August 2025 19:17:32 +0000 (0:00:01.207) 0:00:03.302 ********* 2025-08-29 19:17:37.738275 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 19:17:37.738288 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 19:17:37.738299 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 19:17:37.738310 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 19:17:37.738321 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 19:17:37.738332 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-08-29 19:17:37.738343 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 19:17:37.738354 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 19:17:37.738364 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 19:17:37.738375 | orchestrator | 2025-08-29 19:17:37.738386 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-08-29 19:17:37.738397 | orchestrator | Friday 29 August 2025 19:17:33 +0000 (0:00:01.355) 0:00:04.657 ********* 2025-08-29 19:17:37.738408 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 19:17:37.738419 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 19:17:37.738430 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 19:17:37.738440 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 19:17:37.738451 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 19:17:37.738469 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 19:17:37.738481 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 19:17:37.738503 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 19:17:37.738514 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 19:17:37.738525 | orchestrator | 2025-08-29 19:17:37.738536 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-08-29 19:17:37.738547 | orchestrator | Friday 29 August 2025 19:17:36 +0000 (0:00:02.217) 0:00:06.875 ********* 2025-08-29 19:17:37.738558 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:17:37.738568 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:17:37.738579 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:17:37.738590 | orchestrator | 2025-08-29 19:17:37.738601 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-08-29 19:17:37.738612 | orchestrator | Friday 29 August 2025 19:17:36 +0000 (0:00:00.618) 0:00:07.493 ********* 2025-08-29 19:17:37.738623 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:17:37.738634 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:17:37.738645 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:17:37.738655 | orchestrator | 2025-08-29 19:17:37.738666 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:17:37.738680 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:17:37.738693 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:17:37.738724 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:17:37.738735 | orchestrator | 2025-08-29 19:17:37.738746 | orchestrator | 2025-08-29 19:17:37.738757 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:17:37.738795 | orchestrator | Friday 29 August 2025 19:17:37 +0000 (0:00:00.626) 0:00:08.120 ********* 2025-08-29 19:17:37.738815 | orchestrator | =============================================================================== 2025-08-29 19:17:37.738828 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.22s 2025-08-29 19:17:37.738838 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-08-29 19:17:37.738849 | orchestrator | Check device availability ----------------------------------------------- 1.21s 2025-08-29 19:17:37.738859 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.77s 2025-08-29 19:17:37.738870 | orchestrator | Request device events from the kernel 
----------------------------------- 0.63s 2025-08-29 19:17:37.738881 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-08-29 19:17:37.738891 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-08-29 19:17:37.738902 | orchestrator | Remove all rook related logical devices --------------------------------- 0.32s 2025-08-29 19:17:37.738913 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-08-29 19:17:50.017185 | orchestrator | 2025-08-29 19:17:50 | INFO  | Task ceea5639-c23f-4c8c-882c-fe19f48700bc (facts) was prepared for execution. 2025-08-29 19:17:50.017293 | orchestrator | 2025-08-29 19:17:50 | INFO  | It takes a moment until task ceea5639-c23f-4c8c-882c-fe19f48700bc (facts) has been started and output is visible here. 2025-08-29 19:18:02.992771 | orchestrator | 2025-08-29 19:18:02.992888 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 19:18:02.992906 | orchestrator | 2025-08-29 19:18:02.992919 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 19:18:02.992947 | orchestrator | Friday 29 August 2025 19:17:54 +0000 (0:00:00.269) 0:00:00.269 ********* 2025-08-29 19:18:02.992959 | orchestrator | ok: [testbed-manager] 2025-08-29 19:18:02.992971 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:18:02.992982 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:18:02.993016 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:18:02.993028 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:18:02.993039 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:18:02.993050 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:18:02.993061 | orchestrator | 2025-08-29 19:18:02.993075 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 19:18:02.993086 | 
orchestrator | Friday 29 August 2025 19:17:55 +0000 (0:00:01.064) 0:00:01.333 ********* 2025-08-29 19:18:02.993097 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:18:02.993109 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:18:02.993120 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:18:02.993131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:18:02.993141 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:02.993152 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:02.993163 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:02.993174 | orchestrator | 2025-08-29 19:18:02.993185 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 19:18:02.993195 | orchestrator | 2025-08-29 19:18:02.993206 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 19:18:02.993217 | orchestrator | Friday 29 August 2025 19:17:56 +0000 (0:00:01.286) 0:00:02.620 ********* 2025-08-29 19:18:02.993228 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:18:02.993239 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:18:02.993251 | orchestrator | ok: [testbed-manager] 2025-08-29 19:18:02.993261 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:18:02.993274 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:18:02.993293 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:18:02.993311 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:18:02.993329 | orchestrator | 2025-08-29 19:18:02.993347 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 19:18:02.993366 | orchestrator | 2025-08-29 19:18:02.993384 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 19:18:02.993425 | orchestrator | Friday 29 August 2025 19:18:01 +0000 (0:00:05.438) 0:00:08.059 ********* 2025-08-29 19:18:02.993446 | orchestrator | 
skipping: [testbed-manager] 2025-08-29 19:18:02.993459 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:18:02.993472 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:18:02.993484 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:18:02.993497 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:02.993509 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:02.993521 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:02.993533 | orchestrator | 2025-08-29 19:18:02.993546 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:18:02.993559 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993574 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993586 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993598 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993611 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993624 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993637 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:18:02.993649 | orchestrator | 2025-08-29 19:18:02.993672 | orchestrator | 2025-08-29 19:18:02.993683 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:18:02.993694 | orchestrator | Friday 29 August 2025 19:18:02 +0000 (0:00:00.767) 0:00:08.826 ********* 2025-08-29 19:18:02.993705 | orchestrator | =============================================================================== 
2025-08-29 19:18:02.993716 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.44s 2025-08-29 19:18:02.993763 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2025-08-29 19:18:02.993775 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s 2025-08-29 19:18:02.993786 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.77s 2025-08-29 19:18:05.268140 | orchestrator | 2025-08-29 19:18:05 | INFO  | Task c98fd347-e411-4c94-8b0e-6b339bacde36 (ceph-configure-lvm-volumes) was prepared for execution. 2025-08-29 19:18:05.268240 | orchestrator | 2025-08-29 19:18:05 | INFO  | It takes a moment until task c98fd347-e411-4c94-8b0e-6b339bacde36 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-08-29 19:18:17.345781 | orchestrator | 2025-08-29 19:18:17.345898 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 19:18:17.345915 | orchestrator | 2025-08-29 19:18:17.345930 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 19:18:17.345953 | orchestrator | Friday 29 August 2025 19:18:09 +0000 (0:00:00.360) 0:00:00.360 ********* 2025-08-29 19:18:17.345968 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 19:18:17.345979 | orchestrator | 2025-08-29 19:18:17.345990 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 19:18:17.346001 | orchestrator | Friday 29 August 2025 19:18:09 +0000 (0:00:00.245) 0:00:00.606 ********* 2025-08-29 19:18:17.346060 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:18:17.346074 | orchestrator | 2025-08-29 19:18:17.346086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346097 | orchestrator | 
Friday 29 August 2025 19:18:10 +0000 (0:00:00.244) 0:00:00.850 ********* 2025-08-29 19:18:17.346108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 19:18:17.346120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 19:18:17.346131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 19:18:17.346142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 19:18:17.346153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 19:18:17.346164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 19:18:17.346174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 19:18:17.346185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 19:18:17.346196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 19:18:17.346206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 19:18:17.346217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 19:18:17.346237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 19:18:17.346256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 19:18:17.346269 | orchestrator | 2025-08-29 19:18:17.346282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346295 | orchestrator | Friday 29 August 2025 19:18:10 +0000 (0:00:00.352) 0:00:01.202 ********* 2025-08-29 
19:18:17.346308 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346341 | orchestrator | 2025-08-29 19:18:17.346354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346366 | orchestrator | Friday 29 August 2025 19:18:10 +0000 (0:00:00.496) 0:00:01.699 ********* 2025-08-29 19:18:17.346378 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346390 | orchestrator | 2025-08-29 19:18:17.346402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346414 | orchestrator | Friday 29 August 2025 19:18:11 +0000 (0:00:00.208) 0:00:01.907 ********* 2025-08-29 19:18:17.346426 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346439 | orchestrator | 2025-08-29 19:18:17.346451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346463 | orchestrator | Friday 29 August 2025 19:18:11 +0000 (0:00:00.212) 0:00:02.120 ********* 2025-08-29 19:18:17.346476 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346492 | orchestrator | 2025-08-29 19:18:17.346505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346517 | orchestrator | Friday 29 August 2025 19:18:11 +0000 (0:00:00.209) 0:00:02.329 ********* 2025-08-29 19:18:17.346529 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346541 | orchestrator | 2025-08-29 19:18:17.346554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346567 | orchestrator | Friday 29 August 2025 19:18:11 +0000 (0:00:00.199) 0:00:02.528 ********* 2025-08-29 19:18:17.346579 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346591 | orchestrator | 2025-08-29 19:18:17.346603 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-08-29 19:18:17.346616 | orchestrator | Friday 29 August 2025 19:18:11 +0000 (0:00:00.195) 0:00:02.724 ********* 2025-08-29 19:18:17.346628 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346639 | orchestrator | 2025-08-29 19:18:17.346650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346661 | orchestrator | Friday 29 August 2025 19:18:12 +0000 (0:00:00.211) 0:00:02.935 ********* 2025-08-29 19:18:17.346671 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.346688 | orchestrator | 2025-08-29 19:18:17.346728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346740 | orchestrator | Friday 29 August 2025 19:18:12 +0000 (0:00:00.206) 0:00:03.142 ********* 2025-08-29 19:18:17.346751 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920) 2025-08-29 19:18:17.346763 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920) 2025-08-29 19:18:17.346774 | orchestrator | 2025-08-29 19:18:17.346785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346796 | orchestrator | Friday 29 August 2025 19:18:12 +0000 (0:00:00.413) 0:00:03.555 ********* 2025-08-29 19:18:17.346824 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0a4c4485-d2ea-4599-9435-e606068873fe) 2025-08-29 19:18:17.346836 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0a4c4485-d2ea-4599-9435-e606068873fe) 2025-08-29 19:18:17.346847 | orchestrator | 2025-08-29 19:18:17.346858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346868 | orchestrator | Friday 29 August 2025 19:18:13 +0000 (0:00:00.427) 0:00:03.982 ********* 2025-08-29 19:18:17.346879 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_21104e56-4cdf-49d9-91fd-13aff314e467) 2025-08-29 19:18:17.346890 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_21104e56-4cdf-49d9-91fd-13aff314e467) 2025-08-29 19:18:17.346901 | orchestrator | 2025-08-29 19:18:17.346911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346922 | orchestrator | Friday 29 August 2025 19:18:13 +0000 (0:00:00.604) 0:00:04.587 ********* 2025-08-29 19:18:17.346933 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5bba166-d17c-451d-864c-9f74c60a90a3) 2025-08-29 19:18:17.346953 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5bba166-d17c-451d-864c-9f74c60a90a3) 2025-08-29 19:18:17.346972 | orchestrator | 2025-08-29 19:18:17.346984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:17.346995 | orchestrator | Friday 29 August 2025 19:18:14 +0000 (0:00:00.622) 0:00:05.209 ********* 2025-08-29 19:18:17.347006 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 19:18:17.347016 | orchestrator | 2025-08-29 19:18:17.347027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347043 | orchestrator | Friday 29 August 2025 19:18:15 +0000 (0:00:00.761) 0:00:05.971 ********* 2025-08-29 19:18:17.347055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 19:18:17.347065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 19:18:17.347076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 19:18:17.347087 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-08-29 19:18:17.347097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 19:18:17.347108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 19:18:17.347118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 19:18:17.347129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 19:18:17.347140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 19:18:17.347151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 19:18:17.347161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 19:18:17.347172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 19:18:17.347182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 19:18:17.347193 | orchestrator | 2025-08-29 19:18:17.347204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347214 | orchestrator | Friday 29 August 2025 19:18:15 +0000 (0:00:00.377) 0:00:06.348 ********* 2025-08-29 19:18:17.347225 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347236 | orchestrator | 2025-08-29 19:18:17.347246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347257 | orchestrator | Friday 29 August 2025 19:18:15 +0000 (0:00:00.209) 0:00:06.557 ********* 2025-08-29 19:18:17.347267 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347278 | orchestrator | 2025-08-29 19:18:17.347289 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347300 | orchestrator | Friday 29 August 2025 19:18:16 +0000 (0:00:00.227) 0:00:06.785 ********* 2025-08-29 19:18:17.347310 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347321 | orchestrator | 2025-08-29 19:18:17.347331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347342 | orchestrator | Friday 29 August 2025 19:18:16 +0000 (0:00:00.212) 0:00:06.997 ********* 2025-08-29 19:18:17.347353 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347363 | orchestrator | 2025-08-29 19:18:17.347374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347385 | orchestrator | Friday 29 August 2025 19:18:16 +0000 (0:00:00.236) 0:00:07.234 ********* 2025-08-29 19:18:17.347396 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347406 | orchestrator | 2025-08-29 19:18:17.347424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347435 | orchestrator | Friday 29 August 2025 19:18:16 +0000 (0:00:00.220) 0:00:07.455 ********* 2025-08-29 19:18:17.347445 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347456 | orchestrator | 2025-08-29 19:18:17.347467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347477 | orchestrator | Friday 29 August 2025 19:18:16 +0000 (0:00:00.206) 0:00:07.661 ********* 2025-08-29 19:18:17.347488 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:17.347499 | orchestrator | 2025-08-29 19:18:17.347509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:17.347520 | orchestrator | Friday 29 August 2025 19:18:17 +0000 (0:00:00.209) 0:00:07.871 ********* 2025-08-29 19:18:17.347537 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.033459 | orchestrator | 2025-08-29 19:18:25.033575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:25.033593 | orchestrator | Friday 29 August 2025 19:18:17 +0000 (0:00:00.222) 0:00:08.093 ********* 2025-08-29 19:18:25.033605 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 19:18:25.033618 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 19:18:25.033629 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 19:18:25.033641 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 19:18:25.033652 | orchestrator | 2025-08-29 19:18:25.033663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:25.033674 | orchestrator | Friday 29 August 2025 19:18:18 +0000 (0:00:00.995) 0:00:09.088 ********* 2025-08-29 19:18:25.033685 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.033757 | orchestrator | 2025-08-29 19:18:25.033768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:25.033779 | orchestrator | Friday 29 August 2025 19:18:18 +0000 (0:00:00.202) 0:00:09.290 ********* 2025-08-29 19:18:25.033790 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.033801 | orchestrator | 2025-08-29 19:18:25.033811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:25.033822 | orchestrator | Friday 29 August 2025 19:18:18 +0000 (0:00:00.213) 0:00:09.504 ********* 2025-08-29 19:18:25.033833 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.033844 | orchestrator | 2025-08-29 19:18:25.033854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:25.033865 | orchestrator | Friday 29 August 2025 19:18:18 +0000 (0:00:00.230) 0:00:09.735 
********* 2025-08-29 19:18:25.033876 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.033887 | orchestrator | 2025-08-29 19:18:25.033897 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 19:18:25.033908 | orchestrator | Friday 29 August 2025 19:18:19 +0000 (0:00:00.204) 0:00:09.940 ********* 2025-08-29 19:18:25.033919 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 19:18:25.033930 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 19:18:25.033941 | orchestrator | 2025-08-29 19:18:25.033951 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 19:18:25.033962 | orchestrator | Friday 29 August 2025 19:18:19 +0000 (0:00:00.172) 0:00:10.113 ********* 2025-08-29 19:18:25.033992 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034006 | orchestrator | 2025-08-29 19:18:25.034073 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 19:18:25.034087 | orchestrator | Friday 29 August 2025 19:18:19 +0000 (0:00:00.142) 0:00:10.255 ********* 2025-08-29 19:18:25.034100 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034112 | orchestrator | 2025-08-29 19:18:25.034125 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 19:18:25.034138 | orchestrator | Friday 29 August 2025 19:18:19 +0000 (0:00:00.136) 0:00:10.391 ********* 2025-08-29 19:18:25.034150 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034186 | orchestrator | 2025-08-29 19:18:25.034199 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 19:18:25.034212 | orchestrator | Friday 29 August 2025 19:18:19 +0000 (0:00:00.140) 0:00:10.531 ********* 2025-08-29 19:18:25.034224 | orchestrator | ok: [testbed-node-3] 
2025-08-29 19:18:25.034238 | orchestrator | 2025-08-29 19:18:25.034250 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 19:18:25.034263 | orchestrator | Friday 29 August 2025 19:18:19 +0000 (0:00:00.147) 0:00:10.678 ********* 2025-08-29 19:18:25.034276 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159b9ed4-8d08-5970-86a8-bd63a32380d6'}}) 2025-08-29 19:18:25.034289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '338f76e1-8833-5be4-9943-9980bb5050e8'}}) 2025-08-29 19:18:25.034301 | orchestrator | 2025-08-29 19:18:25.034315 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 19:18:25.034334 | orchestrator | Friday 29 August 2025 19:18:20 +0000 (0:00:00.171) 0:00:10.850 ********* 2025-08-29 19:18:25.034355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159b9ed4-8d08-5970-86a8-bd63a32380d6'}})  2025-08-29 19:18:25.034383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '338f76e1-8833-5be4-9943-9980bb5050e8'}})  2025-08-29 19:18:25.034403 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034423 | orchestrator | 2025-08-29 19:18:25.034442 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 19:18:25.034455 | orchestrator | Friday 29 August 2025 19:18:20 +0000 (0:00:00.150) 0:00:11.001 ********* 2025-08-29 19:18:25.034466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159b9ed4-8d08-5970-86a8-bd63a32380d6'}})  2025-08-29 19:18:25.034477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '338f76e1-8833-5be4-9943-9980bb5050e8'}})  2025-08-29 19:18:25.034488 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034499 | 
orchestrator | 2025-08-29 19:18:25.034510 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 19:18:25.034521 | orchestrator | Friday 29 August 2025 19:18:20 +0000 (0:00:00.339) 0:00:11.341 ********* 2025-08-29 19:18:25.034532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '159b9ed4-8d08-5970-86a8-bd63a32380d6'}})  2025-08-29 19:18:25.034542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '338f76e1-8833-5be4-9943-9980bb5050e8'}})  2025-08-29 19:18:25.034553 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034564 | orchestrator | 2025-08-29 19:18:25.034593 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 19:18:25.034605 | orchestrator | Friday 29 August 2025 19:18:20 +0000 (0:00:00.148) 0:00:11.489 ********* 2025-08-29 19:18:25.034616 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:18:25.034627 | orchestrator | 2025-08-29 19:18:25.034645 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 19:18:25.034656 | orchestrator | Friday 29 August 2025 19:18:20 +0000 (0:00:00.152) 0:00:11.642 ********* 2025-08-29 19:18:25.034667 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:18:25.034677 | orchestrator | 2025-08-29 19:18:25.034712 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 19:18:25.034725 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.158) 0:00:11.800 ********* 2025-08-29 19:18:25.034736 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:18:25.034746 | orchestrator | 2025-08-29 19:18:25.034757 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 19:18:25.034768 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.184) 0:00:11.984 
*********
2025-08-29 19:18:25.034779 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:18:25.034789 | orchestrator |
2025-08-29 19:18:25.034810 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 19:18:25.034821 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.167) 0:00:12.152 *********
2025-08-29 19:18:25.034831 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:18:25.034842 | orchestrator |
2025-08-29 19:18:25.034853 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 19:18:25.034863 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.141) 0:00:12.294 *********
2025-08-29 19:18:25.034874 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:18:25.034885 | orchestrator |     "ceph_osd_devices": {
2025-08-29 19:18:25.034896 | orchestrator |         "sdb": {
2025-08-29 19:18:25.034907 | orchestrator |             "osd_lvm_uuid": "159b9ed4-8d08-5970-86a8-bd63a32380d6"
2025-08-29 19:18:25.034918 | orchestrator |         },
2025-08-29 19:18:25.034929 | orchestrator |         "sdc": {
2025-08-29 19:18:25.034940 | orchestrator |             "osd_lvm_uuid": "338f76e1-8833-5be4-9943-9980bb5050e8"
2025-08-29 19:18:25.034950 | orchestrator |         }
2025-08-29 19:18:25.034961 | orchestrator |     }
2025-08-29 19:18:25.034972 | orchestrator | }
2025-08-29 19:18:25.034983 | orchestrator |
2025-08-29 19:18:25.034994 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 19:18:25.035004 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.147) 0:00:12.441 *********
2025-08-29 19:18:25.035015 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:18:25.035025 | orchestrator |
2025-08-29 19:18:25.035036 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 19:18:25.035047 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.131) 0:00:12.572
2025-08-29 19:18:25.035057 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:18:25.035068 | orchestrator |
2025-08-29 19:18:25.035078 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 19:18:25.035089 | orchestrator | Friday 29 August 2025 19:18:21 +0000 (0:00:00.136) 0:00:12.709 *********
2025-08-29 19:18:25.035100 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:18:25.035110 | orchestrator |
2025-08-29 19:18:25.035121 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 19:18:25.035132 | orchestrator | Friday 29 August 2025 19:18:22 +0000 (0:00:00.156) 0:00:12.866 *********
2025-08-29 19:18:25.035142 | orchestrator | changed: [testbed-node-3] => {
2025-08-29 19:18:25.035153 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-08-29 19:18:25.035164 | orchestrator |         "ceph_osd_devices": {
2025-08-29 19:18:25.035175 | orchestrator |             "sdb": {
2025-08-29 19:18:25.035185 | orchestrator |                 "osd_lvm_uuid": "159b9ed4-8d08-5970-86a8-bd63a32380d6"
2025-08-29 19:18:25.035196 | orchestrator |             },
2025-08-29 19:18:25.035207 | orchestrator |             "sdc": {
2025-08-29 19:18:25.035218 | orchestrator |                 "osd_lvm_uuid": "338f76e1-8833-5be4-9943-9980bb5050e8"
2025-08-29 19:18:25.035228 | orchestrator |             }
2025-08-29 19:18:25.035239 | orchestrator |         },
2025-08-29 19:18:25.035250 | orchestrator |         "lvm_volumes": [
2025-08-29 19:18:25.035260 | orchestrator |             {
2025-08-29 19:18:25.035271 | orchestrator |                 "data": "osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6",
2025-08-29 19:18:25.035282 | orchestrator |                 "data_vg": "ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6"
2025-08-29 19:18:25.035292 | orchestrator |             },
2025-08-29 19:18:25.035303 | orchestrator |             {
2025-08-29 19:18:25.035314 | orchestrator |                 "data": "osd-block-338f76e1-8833-5be4-9943-9980bb5050e8",
2025-08-29 19:18:25.035324 | orchestrator |                 "data_vg": "ceph-338f76e1-8833-5be4-9943-9980bb5050e8"
2025-08-29 19:18:25.035335 | orchestrator |  } 2025-08-29 19:18:25.035346 | orchestrator |  ] 2025-08-29 19:18:25.035356 | orchestrator |  } 2025-08-29 19:18:25.035367 | orchestrator | } 2025-08-29 19:18:25.035381 | orchestrator | 2025-08-29 19:18:25.035407 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 19:18:25.035439 | orchestrator | Friday 29 August 2025 19:18:22 +0000 (0:00:00.222) 0:00:13.089 ********* 2025-08-29 19:18:25.035453 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 19:18:25.035465 | orchestrator | 2025-08-29 19:18:25.035475 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 19:18:25.035486 | orchestrator | 2025-08-29 19:18:25.035497 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 19:18:25.035507 | orchestrator | Friday 29 August 2025 19:18:24 +0000 (0:00:02.203) 0:00:15.292 ********* 2025-08-29 19:18:25.035518 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 19:18:25.035529 | orchestrator | 2025-08-29 19:18:25.035540 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 19:18:25.035550 | orchestrator | Friday 29 August 2025 19:18:24 +0000 (0:00:00.257) 0:00:15.550 ********* 2025-08-29 19:18:25.035561 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:18:25.035572 | orchestrator | 2025-08-29 19:18:25.035583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:25.035601 | orchestrator | Friday 29 August 2025 19:18:25 +0000 (0:00:00.228) 0:00:15.779 ********* 2025-08-29 19:18:32.984405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 19:18:32.984517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-4 => (item=loop1) 2025-08-29 19:18:32.984532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 19:18:32.984544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 19:18:32.984556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 19:18:32.984567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 19:18:32.984578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 19:18:32.984589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 19:18:32.984599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 19:18:32.984611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 19:18:32.984621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 19:18:32.984632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 19:18:32.984643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 19:18:32.984659 | orchestrator | 2025-08-29 19:18:32.984671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.984724 | orchestrator | Friday 29 August 2025 19:18:25 +0000 (0:00:00.384) 0:00:16.163 ********* 2025-08-29 19:18:32.984736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.984748 | orchestrator | 2025-08-29 19:18:32.984760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.984771 | orchestrator | Friday 29 August 2025 
19:18:25 +0000 (0:00:00.205) 0:00:16.368 ********* 2025-08-29 19:18:32.984781 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.984792 | orchestrator | 2025-08-29 19:18:32.984803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.984814 | orchestrator | Friday 29 August 2025 19:18:25 +0000 (0:00:00.201) 0:00:16.570 ********* 2025-08-29 19:18:32.984825 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.984836 | orchestrator | 2025-08-29 19:18:32.984847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.984858 | orchestrator | Friday 29 August 2025 19:18:26 +0000 (0:00:00.206) 0:00:16.776 ********* 2025-08-29 19:18:32.984869 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.984904 | orchestrator | 2025-08-29 19:18:32.984915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.984926 | orchestrator | Friday 29 August 2025 19:18:26 +0000 (0:00:00.197) 0:00:16.974 ********* 2025-08-29 19:18:32.984937 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.984948 | orchestrator | 2025-08-29 19:18:32.984958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.984969 | orchestrator | Friday 29 August 2025 19:18:26 +0000 (0:00:00.650) 0:00:17.624 ********* 2025-08-29 19:18:32.984980 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.984990 | orchestrator | 2025-08-29 19:18:32.985001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985012 | orchestrator | Friday 29 August 2025 19:18:27 +0000 (0:00:00.191) 0:00:17.815 ********* 2025-08-29 19:18:32.985040 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985051 | orchestrator | 2025-08-29 19:18:32.985062 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985073 | orchestrator | Friday 29 August 2025 19:18:27 +0000 (0:00:00.218) 0:00:18.034 ********* 2025-08-29 19:18:32.985083 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985094 | orchestrator | 2025-08-29 19:18:32.985105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985116 | orchestrator | Friday 29 August 2025 19:18:27 +0000 (0:00:00.196) 0:00:18.230 ********* 2025-08-29 19:18:32.985127 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd) 2025-08-29 19:18:32.985139 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd) 2025-08-29 19:18:32.985150 | orchestrator | 2025-08-29 19:18:32.985161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985172 | orchestrator | Friday 29 August 2025 19:18:27 +0000 (0:00:00.514) 0:00:18.744 ********* 2025-08-29 19:18:32.985183 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6) 2025-08-29 19:18:32.985194 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6) 2025-08-29 19:18:32.985204 | orchestrator | 2025-08-29 19:18:32.985215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985226 | orchestrator | Friday 29 August 2025 19:18:28 +0000 (0:00:00.415) 0:00:19.159 ********* 2025-08-29 19:18:32.985237 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32) 2025-08-29 19:18:32.985247 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32) 2025-08-29 19:18:32.985258 | orchestrator | 2025-08-29 
19:18:32.985269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985280 | orchestrator | Friday 29 August 2025 19:18:28 +0000 (0:00:00.475) 0:00:19.634 ********* 2025-08-29 19:18:32.985308 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d) 2025-08-29 19:18:32.985321 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d) 2025-08-29 19:18:32.985332 | orchestrator | 2025-08-29 19:18:32.985343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:32.985354 | orchestrator | Friday 29 August 2025 19:18:29 +0000 (0:00:00.483) 0:00:20.118 ********* 2025-08-29 19:18:32.985365 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 19:18:32.985376 | orchestrator | 2025-08-29 19:18:32.985387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985398 | orchestrator | Friday 29 August 2025 19:18:29 +0000 (0:00:00.337) 0:00:20.456 ********* 2025-08-29 19:18:32.985409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 19:18:32.985428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 19:18:32.985439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 19:18:32.985449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 19:18:32.985460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 19:18:32.985471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 19:18:32.985482 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 19:18:32.985492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 19:18:32.985503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 19:18:32.985514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 19:18:32.985524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 19:18:32.985535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 19:18:32.985546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 19:18:32.985556 | orchestrator | 2025-08-29 19:18:32.985567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985578 | orchestrator | Friday 29 August 2025 19:18:30 +0000 (0:00:00.376) 0:00:20.832 ********* 2025-08-29 19:18:32.985589 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985600 | orchestrator | 2025-08-29 19:18:32.985611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985621 | orchestrator | Friday 29 August 2025 19:18:30 +0000 (0:00:00.191) 0:00:21.023 ********* 2025-08-29 19:18:32.985632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985643 | orchestrator | 2025-08-29 19:18:32.985659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985670 | orchestrator | Friday 29 August 2025 19:18:30 +0000 (0:00:00.691) 0:00:21.714 ********* 2025-08-29 19:18:32.985703 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985714 | orchestrator | 
2025-08-29 19:18:32.985725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985736 | orchestrator | Friday 29 August 2025 19:18:31 +0000 (0:00:00.204) 0:00:21.919 ********* 2025-08-29 19:18:32.985747 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985757 | orchestrator | 2025-08-29 19:18:32.985768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985779 | orchestrator | Friday 29 August 2025 19:18:31 +0000 (0:00:00.184) 0:00:22.103 ********* 2025-08-29 19:18:32.985790 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985801 | orchestrator | 2025-08-29 19:18:32.985811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985822 | orchestrator | Friday 29 August 2025 19:18:31 +0000 (0:00:00.190) 0:00:22.294 ********* 2025-08-29 19:18:32.985833 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985844 | orchestrator | 2025-08-29 19:18:32.985854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985865 | orchestrator | Friday 29 August 2025 19:18:31 +0000 (0:00:00.183) 0:00:22.478 ********* 2025-08-29 19:18:32.985876 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985887 | orchestrator | 2025-08-29 19:18:32.985897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985908 | orchestrator | Friday 29 August 2025 19:18:31 +0000 (0:00:00.187) 0:00:22.665 ********* 2025-08-29 19:18:32.985919 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.985930 | orchestrator | 2025-08-29 19:18:32.985941 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.985958 | orchestrator | Friday 29 August 2025 19:18:32 +0000 
(0:00:00.179) 0:00:22.845 ********* 2025-08-29 19:18:32.985969 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 19:18:32.985980 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 19:18:32.985991 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 19:18:32.986002 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 19:18:32.986013 | orchestrator | 2025-08-29 19:18:32.986080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:32.986091 | orchestrator | Friday 29 August 2025 19:18:32 +0000 (0:00:00.691) 0:00:23.537 ********* 2025-08-29 19:18:32.986102 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:32.986113 | orchestrator | 2025-08-29 19:18:32.986130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:38.579958 | orchestrator | Friday 29 August 2025 19:18:32 +0000 (0:00:00.196) 0:00:23.733 ********* 2025-08-29 19:18:38.580062 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580078 | orchestrator | 2025-08-29 19:18:38.580090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:38.580101 | orchestrator | Friday 29 August 2025 19:18:33 +0000 (0:00:00.191) 0:00:23.925 ********* 2025-08-29 19:18:38.580111 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580121 | orchestrator | 2025-08-29 19:18:38.580132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:38.580142 | orchestrator | Friday 29 August 2025 19:18:33 +0000 (0:00:00.182) 0:00:24.108 ********* 2025-08-29 19:18:38.580152 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580162 | orchestrator | 2025-08-29 19:18:38.580172 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 19:18:38.580181 | orchestrator | 
Friday 29 August 2025 19:18:33 +0000 (0:00:00.189) 0:00:24.298 ********* 2025-08-29 19:18:38.580192 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-08-29 19:18:38.580202 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-08-29 19:18:38.580212 | orchestrator | 2025-08-29 19:18:38.580222 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 19:18:38.580232 | orchestrator | Friday 29 August 2025 19:18:33 +0000 (0:00:00.250) 0:00:24.549 ********* 2025-08-29 19:18:38.580241 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580251 | orchestrator | 2025-08-29 19:18:38.580261 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 19:18:38.580272 | orchestrator | Friday 29 August 2025 19:18:33 +0000 (0:00:00.096) 0:00:24.646 ********* 2025-08-29 19:18:38.580282 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580291 | orchestrator | 2025-08-29 19:18:38.580301 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 19:18:38.580311 | orchestrator | Friday 29 August 2025 19:18:33 +0000 (0:00:00.094) 0:00:24.740 ********* 2025-08-29 19:18:38.580321 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580331 | orchestrator | 2025-08-29 19:18:38.580341 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 19:18:38.580350 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.092) 0:00:24.833 ********* 2025-08-29 19:18:38.580361 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:18:38.580371 | orchestrator | 2025-08-29 19:18:38.580382 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 19:18:38.580392 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.092) 0:00:24.925 ********* 
2025-08-29 19:18:38.580402 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f946ce78-a8de-59ba-8bf5-045c292b6708'}}) 2025-08-29 19:18:38.580413 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}}) 2025-08-29 19:18:38.580423 | orchestrator | 2025-08-29 19:18:38.580433 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 19:18:38.580462 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.174) 0:00:25.100 ********* 2025-08-29 19:18:38.580473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f946ce78-a8de-59ba-8bf5-045c292b6708'}})  2025-08-29 19:18:38.580484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}})  2025-08-29 19:18:38.580494 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580503 | orchestrator | 2025-08-29 19:18:38.580529 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 19:18:38.580539 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.168) 0:00:25.268 ********* 2025-08-29 19:18:38.580549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f946ce78-a8de-59ba-8bf5-045c292b6708'}})  2025-08-29 19:18:38.580559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}})  2025-08-29 19:18:38.580569 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580579 | orchestrator | 2025-08-29 19:18:38.580588 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 19:18:38.580598 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.136) 0:00:25.404 ********* 2025-08-29 19:18:38.580607 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f946ce78-a8de-59ba-8bf5-045c292b6708'}})  2025-08-29 19:18:38.580617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}})  2025-08-29 19:18:38.580628 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580637 | orchestrator | 2025-08-29 19:18:38.580647 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 19:18:38.580657 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.134) 0:00:25.539 ********* 2025-08-29 19:18:38.580688 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:18:38.580699 | orchestrator | 2025-08-29 19:18:38.580709 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 19:18:38.580719 | orchestrator | Friday 29 August 2025 19:18:34 +0000 (0:00:00.113) 0:00:25.652 ********* 2025-08-29 19:18:38.580728 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:18:38.580738 | orchestrator | 2025-08-29 19:18:38.580748 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 19:18:38.580758 | orchestrator | Friday 29 August 2025 19:18:35 +0000 (0:00:00.117) 0:00:25.769 ********* 2025-08-29 19:18:38.580767 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580777 | orchestrator | 2025-08-29 19:18:38.580802 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 19:18:38.580813 | orchestrator | Friday 29 August 2025 19:18:35 +0000 (0:00:00.106) 0:00:25.876 ********* 2025-08-29 19:18:38.580822 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580832 | orchestrator | 2025-08-29 19:18:38.580842 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 19:18:38.580851 | orchestrator | 
Friday 29 August 2025 19:18:35 +0000 (0:00:00.274) 0:00:26.150 ********* 2025-08-29 19:18:38.580861 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.580870 | orchestrator | 2025-08-29 19:18:38.580880 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 19:18:38.580890 | orchestrator | Friday 29 August 2025 19:18:35 +0000 (0:00:00.105) 0:00:26.255 ********* 2025-08-29 19:18:38.580899 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 19:18:38.580909 | orchestrator |  "ceph_osd_devices": { 2025-08-29 19:18:38.580919 | orchestrator |  "sdb": { 2025-08-29 19:18:38.580928 | orchestrator |  "osd_lvm_uuid": "f946ce78-a8de-59ba-8bf5-045c292b6708" 2025-08-29 19:18:38.580938 | orchestrator |  }, 2025-08-29 19:18:38.580947 | orchestrator |  "sdc": { 2025-08-29 19:18:38.580964 | orchestrator |  "osd_lvm_uuid": "9d878572-29ec-5c6d-9e5c-f341c26bb0e1" 2025-08-29 19:18:38.580974 | orchestrator |  } 2025-08-29 19:18:38.580983 | orchestrator |  } 2025-08-29 19:18:38.580993 | orchestrator | } 2025-08-29 19:18:38.581004 | orchestrator | 2025-08-29 19:18:38.581013 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 19:18:38.581023 | orchestrator | Friday 29 August 2025 19:18:35 +0000 (0:00:00.115) 0:00:26.371 ********* 2025-08-29 19:18:38.581033 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.581042 | orchestrator | 2025-08-29 19:18:38.581052 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 19:18:38.581062 | orchestrator | Friday 29 August 2025 19:18:35 +0000 (0:00:00.124) 0:00:26.496 ********* 2025-08-29 19:18:38.581071 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.581080 | orchestrator | 2025-08-29 19:18:38.581090 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 19:18:38.581100 | orchestrator | Friday 29 
August 2025 19:18:35 +0000 (0:00:00.110) 0:00:26.606 ********* 2025-08-29 19:18:38.581109 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:18:38.581119 | orchestrator | 2025-08-29 19:18:38.581128 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 19:18:38.581138 | orchestrator | Friday 29 August 2025 19:18:35 +0000 (0:00:00.112) 0:00:26.719 ********* 2025-08-29 19:18:38.581147 | orchestrator | changed: [testbed-node-4] => { 2025-08-29 19:18:38.581157 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 19:18:38.581167 | orchestrator |  "ceph_osd_devices": { 2025-08-29 19:18:38.581176 | orchestrator |  "sdb": { 2025-08-29 19:18:38.581186 | orchestrator |  "osd_lvm_uuid": "f946ce78-a8de-59ba-8bf5-045c292b6708" 2025-08-29 19:18:38.581196 | orchestrator |  }, 2025-08-29 19:18:38.581205 | orchestrator |  "sdc": { 2025-08-29 19:18:38.581215 | orchestrator |  "osd_lvm_uuid": "9d878572-29ec-5c6d-9e5c-f341c26bb0e1" 2025-08-29 19:18:38.581225 | orchestrator |  } 2025-08-29 19:18:38.581234 | orchestrator |  }, 2025-08-29 19:18:38.581244 | orchestrator |  "lvm_volumes": [ 2025-08-29 19:18:38.581253 | orchestrator |  { 2025-08-29 19:18:38.581263 | orchestrator |  "data": "osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708", 2025-08-29 19:18:38.581273 | orchestrator |  "data_vg": "ceph-f946ce78-a8de-59ba-8bf5-045c292b6708" 2025-08-29 19:18:38.581282 | orchestrator |  }, 2025-08-29 19:18:38.581292 | orchestrator |  { 2025-08-29 19:18:38.581301 | orchestrator |  "data": "osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1", 2025-08-29 19:18:38.581311 | orchestrator |  "data_vg": "ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1" 2025-08-29 19:18:38.581320 | orchestrator |  } 2025-08-29 19:18:38.581330 | orchestrator |  ] 2025-08-29 19:18:38.581339 | orchestrator |  } 2025-08-29 19:18:38.581349 | orchestrator | } 2025-08-29 19:18:38.581358 | orchestrator | 2025-08-29 19:18:38.581368 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2025-08-29 19:18:38.581377 | orchestrator | Friday 29 August 2025 19:18:36 +0000 (0:00:00.204) 0:00:26.923 ********* 2025-08-29 19:18:38.581387 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 19:18:38.581396 | orchestrator | 2025-08-29 19:18:38.581406 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 19:18:38.581415 | orchestrator | 2025-08-29 19:18:38.581425 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 19:18:38.581434 | orchestrator | Friday 29 August 2025 19:18:37 +0000 (0:00:00.904) 0:00:27.827 ********* 2025-08-29 19:18:38.581444 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 19:18:38.581453 | orchestrator | 2025-08-29 19:18:38.581463 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 19:18:38.581472 | orchestrator | Friday 29 August 2025 19:18:37 +0000 (0:00:00.409) 0:00:28.237 ********* 2025-08-29 19:18:38.581488 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:18:38.581498 | orchestrator | 2025-08-29 19:18:38.581513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:38.581523 | orchestrator | Friday 29 August 2025 19:18:38 +0000 (0:00:00.692) 0:00:28.929 ********* 2025-08-29 19:18:38.581533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 19:18:38.581542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 19:18:38.581552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 19:18:38.581561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 
19:18:38.581571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 19:18:38.581580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 19:18:38.581595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 19:18:47.074995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 19:18:47.075073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 19:18:47.075080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 19:18:47.075085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 19:18:47.075090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 19:18:47.075095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 19:18:47.075100 | orchestrator | 2025-08-29 19:18:47.075106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075112 | orchestrator | Friday 29 August 2025 19:18:38 +0000 (0:00:00.393) 0:00:29.323 ********* 2025-08-29 19:18:47.075117 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075123 | orchestrator | 2025-08-29 19:18:47.075128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075133 | orchestrator | Friday 29 August 2025 19:18:38 +0000 (0:00:00.229) 0:00:29.552 ********* 2025-08-29 19:18:47.075137 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075142 | orchestrator | 2025-08-29 19:18:47.075147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
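The "Add known links" tasks above iterate over every block device and record its persistent /dev/disk/by-id aliases (the later "ok" items such as scsi-0QEMU_QEMU_HARDDISK_… are those aliases for sdb, sdc, sdd, and sr0). A minimal sketch of that matching, using link names copied from this log; the helper name `links_for_device` and the pre-resolved mapping are illustrative assumptions, not part of the playbook:

```python
def links_for_device(by_id_links, device):
    """Return the /dev/disk/by-id link names that resolve to the given device.

    by_id_links: mapping of link name -> target device name (e.g. 'sdb'),
    as would be obtained by resolving the symlinks under /dev/disk/by-id.
    """
    return sorted(name for name, target in by_id_links.items() if target == device)


# Sample aliases taken from the testbed-node-5 output above.
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}

print(links_for_device(by_id, "sdb"))
```

On a real node the mapping would come from resolving each symlink under /dev/disk/by-id with `os.path.realpath` before filtering.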
2025-08-29 19:18:47.075151 | orchestrator | Friday 29 August 2025 19:18:39 +0000 (0:00:00.210) 0:00:29.763 ********* 2025-08-29 19:18:47.075156 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075161 | orchestrator | 2025-08-29 19:18:47.075165 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075170 | orchestrator | Friday 29 August 2025 19:18:39 +0000 (0:00:00.208) 0:00:29.971 ********* 2025-08-29 19:18:47.075175 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075180 | orchestrator | 2025-08-29 19:18:47.075184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075189 | orchestrator | Friday 29 August 2025 19:18:39 +0000 (0:00:00.225) 0:00:30.196 ********* 2025-08-29 19:18:47.075194 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075198 | orchestrator | 2025-08-29 19:18:47.075203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075208 | orchestrator | Friday 29 August 2025 19:18:39 +0000 (0:00:00.210) 0:00:30.407 ********* 2025-08-29 19:18:47.075213 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075217 | orchestrator | 2025-08-29 19:18:47.075222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075227 | orchestrator | Friday 29 August 2025 19:18:39 +0000 (0:00:00.190) 0:00:30.598 ********* 2025-08-29 19:18:47.075232 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075253 | orchestrator | 2025-08-29 19:18:47.075258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075263 | orchestrator | Friday 29 August 2025 19:18:40 +0000 (0:00:00.206) 0:00:30.804 ********* 2025-08-29 19:18:47.075268 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075272 
| orchestrator | 2025-08-29 19:18:47.075277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075282 | orchestrator | Friday 29 August 2025 19:18:40 +0000 (0:00:00.256) 0:00:31.061 ********* 2025-08-29 19:18:47.075287 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9) 2025-08-29 19:18:47.075292 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9) 2025-08-29 19:18:47.075297 | orchestrator | 2025-08-29 19:18:47.075302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075306 | orchestrator | Friday 29 August 2025 19:18:40 +0000 (0:00:00.665) 0:00:31.726 ********* 2025-08-29 19:18:47.075311 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c) 2025-08-29 19:18:47.075316 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c) 2025-08-29 19:18:47.075320 | orchestrator | 2025-08-29 19:18:47.075325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075330 | orchestrator | Friday 29 August 2025 19:18:41 +0000 (0:00:00.930) 0:00:32.656 ********* 2025-08-29 19:18:47.075334 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80) 2025-08-29 19:18:47.075339 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80) 2025-08-29 19:18:47.075344 | orchestrator | 2025-08-29 19:18:47.075348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075353 | orchestrator | Friday 29 August 2025 19:18:42 +0000 (0:00:00.465) 0:00:33.122 ********* 2025-08-29 19:18:47.075358 | orchestrator | ok: 
[testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03) 2025-08-29 19:18:47.075363 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03) 2025-08-29 19:18:47.075367 | orchestrator | 2025-08-29 19:18:47.075372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:18:47.075377 | orchestrator | Friday 29 August 2025 19:18:42 +0000 (0:00:00.459) 0:00:33.582 ********* 2025-08-29 19:18:47.075381 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 19:18:47.075386 | orchestrator | 2025-08-29 19:18:47.075391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075396 | orchestrator | Friday 29 August 2025 19:18:43 +0000 (0:00:00.342) 0:00:33.924 ********* 2025-08-29 19:18:47.075413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 19:18:47.075418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 19:18:47.075423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 19:18:47.075427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 19:18:47.075432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 19:18:47.075437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 19:18:47.075454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 19:18:47.075459 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 19:18:47.075464 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 19:18:47.075473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 19:18:47.075478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 19:18:47.075482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 19:18:47.075487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 19:18:47.075492 | orchestrator | 2025-08-29 19:18:47.075496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075501 | orchestrator | Friday 29 August 2025 19:18:43 +0000 (0:00:00.413) 0:00:34.338 ********* 2025-08-29 19:18:47.075506 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075510 | orchestrator | 2025-08-29 19:18:47.075515 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075520 | orchestrator | Friday 29 August 2025 19:18:43 +0000 (0:00:00.216) 0:00:34.554 ********* 2025-08-29 19:18:47.075524 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075529 | orchestrator | 2025-08-29 19:18:47.075534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075538 | orchestrator | Friday 29 August 2025 19:18:43 +0000 (0:00:00.195) 0:00:34.750 ********* 2025-08-29 19:18:47.075544 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075551 | orchestrator | 2025-08-29 19:18:47.075562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075573 | orchestrator | Friday 29 August 2025 19:18:44 +0000 (0:00:00.220) 0:00:34.971 ********* 2025-08-29 19:18:47.075582 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 19:18:47.075590 | orchestrator | 2025-08-29 19:18:47.075598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075605 | orchestrator | Friday 29 August 2025 19:18:44 +0000 (0:00:00.204) 0:00:35.175 ********* 2025-08-29 19:18:47.075612 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075619 | orchestrator | 2025-08-29 19:18:47.075626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075633 | orchestrator | Friday 29 August 2025 19:18:44 +0000 (0:00:00.206) 0:00:35.381 ********* 2025-08-29 19:18:47.075640 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075648 | orchestrator | 2025-08-29 19:18:47.075675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075683 | orchestrator | Friday 29 August 2025 19:18:45 +0000 (0:00:00.684) 0:00:36.066 ********* 2025-08-29 19:18:47.075691 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075698 | orchestrator | 2025-08-29 19:18:47.075706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075714 | orchestrator | Friday 29 August 2025 19:18:45 +0000 (0:00:00.224) 0:00:36.291 ********* 2025-08-29 19:18:47.075721 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075726 | orchestrator | 2025-08-29 19:18:47.075732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075737 | orchestrator | Friday 29 August 2025 19:18:45 +0000 (0:00:00.233) 0:00:36.525 ********* 2025-08-29 19:18:47.075742 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 19:18:47.075747 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 19:18:47.075753 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
19:18:47.075758 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 19:18:47.075763 | orchestrator | 2025-08-29 19:18:47.075768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075772 | orchestrator | Friday 29 August 2025 19:18:46 +0000 (0:00:00.681) 0:00:37.206 ********* 2025-08-29 19:18:47.075777 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075781 | orchestrator | 2025-08-29 19:18:47.075786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075832 | orchestrator | Friday 29 August 2025 19:18:46 +0000 (0:00:00.169) 0:00:37.376 ********* 2025-08-29 19:18:47.075837 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075841 | orchestrator | 2025-08-29 19:18:47.075846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075850 | orchestrator | Friday 29 August 2025 19:18:46 +0000 (0:00:00.163) 0:00:37.539 ********* 2025-08-29 19:18:47.075855 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075859 | orchestrator | 2025-08-29 19:18:47.075864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:18:47.075869 | orchestrator | Friday 29 August 2025 19:18:46 +0000 (0:00:00.151) 0:00:37.691 ********* 2025-08-29 19:18:47.075873 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:47.075878 | orchestrator | 2025-08-29 19:18:47.075882 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 19:18:47.075892 | orchestrator | Friday 29 August 2025 19:18:47 +0000 (0:00:00.133) 0:00:37.824 ********* 2025-08-29 19:18:50.984365 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-08-29 19:18:50.984449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 
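The "Set UUIDs for OSD VGs/LVs" task above turns each `{'key': 'sdb', 'value': None}` item into a stable `osd_lvm_uuid`. The UUIDs in this log (e.g. d29334ae-dac4-5c8b-…) have a version-5 nibble, which suggests name-based, deterministic UUIDs. A sketch under that assumption; the namespace and the "hostname/device" name used here are guesses for illustration only, and this will not reproduce the exact values in the log:

```python
import uuid


def osd_lvm_uuid(hostname, device, namespace=uuid.NAMESPACE_DNS):
    """Hypothetical derivation: hash a stable per-host, per-device name into
    a version-5 UUID, so repeated plays assign the same UUID to the same disk."""
    return str(uuid.uuid5(namespace, f"{hostname}/{device}"))


u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u)
```

The practical point is idempotence: a name-based UUID gives the same result on every run, so re-running the play never renames an existing OSD VG/LV.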
2025-08-29 19:18:50.984458 | orchestrator | 2025-08-29 19:18:50.984466 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 19:18:50.984473 | orchestrator | Friday 29 August 2025 19:18:47 +0000 (0:00:00.164) 0:00:37.989 ********* 2025-08-29 19:18:50.984479 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984486 | orchestrator | 2025-08-29 19:18:50.984493 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 19:18:50.984499 | orchestrator | Friday 29 August 2025 19:18:47 +0000 (0:00:00.183) 0:00:38.173 ********* 2025-08-29 19:18:50.984505 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984512 | orchestrator | 2025-08-29 19:18:50.984518 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 19:18:50.984524 | orchestrator | Friday 29 August 2025 19:18:47 +0000 (0:00:00.166) 0:00:38.339 ********* 2025-08-29 19:18:50.984530 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984536 | orchestrator | 2025-08-29 19:18:50.984543 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 19:18:50.984549 | orchestrator | Friday 29 August 2025 19:18:47 +0000 (0:00:00.105) 0:00:38.445 ********* 2025-08-29 19:18:50.984555 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:18:50.984562 | orchestrator | 2025-08-29 19:18:50.984569 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 19:18:50.984575 | orchestrator | Friday 29 August 2025 19:18:47 +0000 (0:00:00.255) 0:00:38.700 ********* 2025-08-29 19:18:50.984582 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd29334ae-dac4-5c8b-9540-76ee60da5ca1'}}) 2025-08-29 19:18:50.984589 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'916dc454-8beb-55d0-b00a-22c96f7025a6'}}) 2025-08-29 19:18:50.984595 | orchestrator | 2025-08-29 19:18:50.984601 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 19:18:50.984607 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.154) 0:00:38.855 ********* 2025-08-29 19:18:50.984614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd29334ae-dac4-5c8b-9540-76ee60da5ca1'}})  2025-08-29 19:18:50.984622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '916dc454-8beb-55d0-b00a-22c96f7025a6'}})  2025-08-29 19:18:50.984628 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984634 | orchestrator | 2025-08-29 19:18:50.984641 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 19:18:50.984672 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.129) 0:00:38.984 ********* 2025-08-29 19:18:50.984679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd29334ae-dac4-5c8b-9540-76ee60da5ca1'}})  2025-08-29 19:18:50.984705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '916dc454-8beb-55d0-b00a-22c96f7025a6'}})  2025-08-29 19:18:50.984712 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984718 | orchestrator | 2025-08-29 19:18:50.984724 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 19:18:50.984731 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.133) 0:00:39.117 ********* 2025-08-29 19:18:50.984737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd29334ae-dac4-5c8b-9540-76ee60da5ca1'}})  2025-08-29 19:18:50.984757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'916dc454-8beb-55d0-b00a-22c96f7025a6'}})  2025-08-29 19:18:50.984763 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984769 | orchestrator | 2025-08-29 19:18:50.984776 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 19:18:50.984782 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.130) 0:00:39.248 ********* 2025-08-29 19:18:50.984788 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:18:50.984794 | orchestrator | 2025-08-29 19:18:50.984801 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 19:18:50.984807 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.136) 0:00:39.385 ********* 2025-08-29 19:18:50.984813 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:18:50.984819 | orchestrator | 2025-08-29 19:18:50.984825 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 19:18:50.984831 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.150) 0:00:39.536 ********* 2025-08-29 19:18:50.984838 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984844 | orchestrator | 2025-08-29 19:18:50.984850 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 19:18:50.984856 | orchestrator | Friday 29 August 2025 19:18:48 +0000 (0:00:00.144) 0:00:39.681 ********* 2025-08-29 19:18:50.984862 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984868 | orchestrator | 2025-08-29 19:18:50.984874 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 19:18:50.984880 | orchestrator | Friday 29 August 2025 19:18:49 +0000 (0:00:00.124) 0:00:39.805 ********* 2025-08-29 19:18:50.984887 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.984893 | orchestrator | 2025-08-29 19:18:50.984899 | orchestrator | TASK [Print 
ceph_osd_devices] ************************************************** 2025-08-29 19:18:50.984905 | orchestrator | Friday 29 August 2025 19:18:49 +0000 (0:00:00.141) 0:00:39.947 ********* 2025-08-29 19:18:50.984911 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 19:18:50.984917 | orchestrator |  "ceph_osd_devices": { 2025-08-29 19:18:50.984924 | orchestrator |  "sdb": { 2025-08-29 19:18:50.984931 | orchestrator |  "osd_lvm_uuid": "d29334ae-dac4-5c8b-9540-76ee60da5ca1" 2025-08-29 19:18:50.984949 | orchestrator |  }, 2025-08-29 19:18:50.984956 | orchestrator |  "sdc": { 2025-08-29 19:18:50.984964 | orchestrator |  "osd_lvm_uuid": "916dc454-8beb-55d0-b00a-22c96f7025a6" 2025-08-29 19:18:50.984971 | orchestrator |  } 2025-08-29 19:18:50.984978 | orchestrator |  } 2025-08-29 19:18:50.984985 | orchestrator | } 2025-08-29 19:18:50.984993 | orchestrator | 2025-08-29 19:18:50.985000 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 19:18:50.985007 | orchestrator | Friday 29 August 2025 19:18:49 +0000 (0:00:00.114) 0:00:40.061 ********* 2025-08-29 19:18:50.985014 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.985021 | orchestrator | 2025-08-29 19:18:50.985028 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 19:18:50.985035 | orchestrator | Friday 29 August 2025 19:18:49 +0000 (0:00:00.107) 0:00:40.168 ********* 2025-08-29 19:18:50.985042 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.985049 | orchestrator | 2025-08-29 19:18:50.985056 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 19:18:50.985068 | orchestrator | Friday 29 August 2025 19:18:49 +0000 (0:00:00.348) 0:00:40.517 ********* 2025-08-29 19:18:50.985074 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:18:50.985081 | orchestrator | 2025-08-29 19:18:50.985088 | orchestrator | TASK [Print 
configuration data] ************************************************ 2025-08-29 19:18:50.985095 | orchestrator | Friday 29 August 2025 19:18:49 +0000 (0:00:00.123) 0:00:40.640 ********* 2025-08-29 19:18:50.985103 | orchestrator | changed: [testbed-node-5] => { 2025-08-29 19:18:50.985110 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 19:18:50.985117 | orchestrator |  "ceph_osd_devices": { 2025-08-29 19:18:50.985124 | orchestrator |  "sdb": { 2025-08-29 19:18:50.985132 | orchestrator |  "osd_lvm_uuid": "d29334ae-dac4-5c8b-9540-76ee60da5ca1" 2025-08-29 19:18:50.985139 | orchestrator |  }, 2025-08-29 19:18:50.985146 | orchestrator |  "sdc": { 2025-08-29 19:18:50.985153 | orchestrator |  "osd_lvm_uuid": "916dc454-8beb-55d0-b00a-22c96f7025a6" 2025-08-29 19:18:50.985160 | orchestrator |  } 2025-08-29 19:18:50.985167 | orchestrator |  }, 2025-08-29 19:18:50.985174 | orchestrator |  "lvm_volumes": [ 2025-08-29 19:18:50.985181 | orchestrator |  { 2025-08-29 19:18:50.985188 | orchestrator |  "data": "osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1", 2025-08-29 19:18:50.985195 | orchestrator |  "data_vg": "ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1" 2025-08-29 19:18:50.985202 | orchestrator |  }, 2025-08-29 19:18:50.985209 | orchestrator |  { 2025-08-29 19:18:50.985216 | orchestrator |  "data": "osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6", 2025-08-29 19:18:50.985223 | orchestrator |  "data_vg": "ceph-916dc454-8beb-55d0-b00a-22c96f7025a6" 2025-08-29 19:18:50.985230 | orchestrator |  } 2025-08-29 19:18:50.985237 | orchestrator |  ] 2025-08-29 19:18:50.985244 | orchestrator |  } 2025-08-29 19:18:50.985254 | orchestrator | } 2025-08-29 19:18:50.985261 | orchestrator | 2025-08-29 19:18:50.985268 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 19:18:50.985275 | orchestrator | Friday 29 August 2025 19:18:50 +0000 (0:00:00.207) 0:00:40.848 ********* 2025-08-29 19:18:50.985282 | orchestrator | changed: 
[testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 19:18:50.985289 | orchestrator | 2025-08-29 19:18:50.985296 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:18:50.985304 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 19:18:50.985312 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 19:18:50.985318 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 19:18:50.985324 | orchestrator | 2025-08-29 19:18:50.985330 | orchestrator | 2025-08-29 19:18:50.985336 | orchestrator | 2025-08-29 19:18:50.985343 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:18:50.985349 | orchestrator | Friday 29 August 2025 19:18:50 +0000 (0:00:00.870) 0:00:41.718 ********* 2025-08-29 19:18:50.985355 | orchestrator | =============================================================================== 2025-08-29 19:18:50.985361 | orchestrator | Write configuration file ------------------------------------------------ 3.98s 2025-08-29 19:18:50.985367 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2025-08-29 19:18:50.985373 | orchestrator | Get initial list of available block devices ----------------------------- 1.17s 2025-08-29 19:18:50.985379 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s 2025-08-29 19:18:50.985385 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2025-08-29 19:18:50.985396 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2025-08-29 19:18:50.985402 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.91s 2025-08-29 
19:18:50.985408 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-08-29 19:18:50.985414 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-08-29 19:18:50.985420 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-08-29 19:18:50.985426 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-08-29 19:18:50.985432 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-08-29 19:18:50.985438 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-08-29 19:18:50.985445 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-08-29 19:18:50.985455 | orchestrator | Print configuration data ------------------------------------------------ 0.63s 2025-08-29 19:18:51.203103 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-08-29 19:18:51.203211 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s 2025-08-29 19:18:51.203227 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-08-29 19:18:51.203240 | orchestrator | Print DB devices -------------------------------------------------------- 0.60s 2025-08-29 19:18:51.203251 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.59s 2025-08-29 19:19:13.727999 | orchestrator | 2025-08-29 19:19:13 | INFO  | Task 777f4623-251c-41b0-8697-b2d52b815035 (sync inventory) is running in background. Output coming soon. 
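The play above turns the per-host `ceph_osd_devices` dict into the `lvm_volumes` list shown under "Print configuration data". A minimal sketch of that mapping (plain Python, not the playbook's actual Jinja2 templating), for the simple case logged here where no separate DB/WAL devices are configured: both the data LV name and its VG name are derived from `osd_lvm_uuid`.

```python
# ceph_osd_devices as printed by the "Print ceph_osd_devices" task above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d29334ae-dac4-5c8b-9540-76ee60da5ca1"},
    "sdc": {"osd_lvm_uuid": "916dc454-8beb-55d0-b00a-22c96f7025a6"},
}

# "Compile lvm_volumes": for a block-only layout, each OSD entry becomes
# {data: osd-block-<uuid>, data_vg: ceph-<uuid>}.
lvm_volumes = [
    {
        "data": f"osd-block-{params['osd_lvm_uuid']}",
        "data_vg": f"ceph-{params['osd_lvm_uuid']}",
    }
    for params in ceph_osd_devices.values()
]

print(lvm_volumes)
```

This matches the `_ceph_configure_lvm_config_data` structure written to the configuration file by the handler at the end of the play; the block + db / block + wal variants (all skipped in this run) would add `db_vg`/`wal_vg` fields on top.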
2025-08-29 19:19:40.264861 | orchestrator | 2025-08-29 19:19:14 | INFO  | Starting group_vars file reorganization 2025-08-29 19:19:40.264964 | orchestrator | 2025-08-29 19:19:14 | INFO  | Moved 0 file(s) to their respective directories 2025-08-29 19:19:40.264977 | orchestrator | 2025-08-29 19:19:14 | INFO  | Group_vars file reorganization completed 2025-08-29 19:19:40.264986 | orchestrator | 2025-08-29 19:19:17 | INFO  | Starting variable preparation from inventory 2025-08-29 19:19:40.264995 | orchestrator | 2025-08-29 19:19:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-08-29 19:19:40.265004 | orchestrator | 2025-08-29 19:19:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-08-29 19:19:40.265012 | orchestrator | 2025-08-29 19:19:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-08-29 19:19:40.265040 | orchestrator | 2025-08-29 19:19:21 | INFO  | 3 file(s) written, 6 host(s) processed 2025-08-29 19:19:40.265048 | orchestrator | 2025-08-29 19:19:21 | INFO  | Variable preparation completed 2025-08-29 19:19:40.265057 | orchestrator | 2025-08-29 19:19:22 | INFO  | Starting inventory overwrite handling 2025-08-29 19:19:40.265065 | orchestrator | 2025-08-29 19:19:22 | INFO  | Handling group overwrites in 99-overwrite 2025-08-29 19:19:40.265077 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removing group frr:children from 60-generic 2025-08-29 19:19:40.265085 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removing group storage:children from 50-kolla 2025-08-29 19:19:40.265093 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removing group netbird:children from 50-infrastruture 2025-08-29 19:19:40.265101 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removing group ceph-rgw from 50-ceph 2025-08-29 19:19:40.265110 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removing group ceph-mds from 50-ceph 2025-08-29 19:19:40.265118 | orchestrator | 2025-08-29 19:19:22 | INFO  | Handling group 
overwrites in 20-roles 2025-08-29 19:19:40.265126 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removing group k3s_node from 50-infrastruture 2025-08-29 19:19:40.265154 | orchestrator | 2025-08-29 19:19:22 | INFO  | Removed 6 group(s) in total 2025-08-29 19:19:40.265162 | orchestrator | 2025-08-29 19:19:22 | INFO  | Inventory overwrite handling completed 2025-08-29 19:19:40.265170 | orchestrator | 2025-08-29 19:19:23 | INFO  | Starting merge of inventory files 2025-08-29 19:19:40.265178 | orchestrator | 2025-08-29 19:19:23 | INFO  | Inventory files merged successfully 2025-08-29 19:19:40.265186 | orchestrator | 2025-08-29 19:19:28 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-08-29 19:19:40.265194 | orchestrator | 2025-08-29 19:19:39 | INFO  | Successfully wrote ClusterShell configuration 2025-08-29 19:19:40.265202 | orchestrator | [master be7c00c] 2025-08-29-19-19 2025-08-29 19:19:40.265211 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-08-29 19:19:42.526427 | orchestrator | 2025-08-29 19:19:42 | INFO  | Task 47217dcf-d4bf-427a-a86e-fd5afde0bdd2 (ceph-create-lvm-devices) was prepared for execution. 2025-08-29 19:19:42.526545 | orchestrator | 2025-08-29 19:19:42 | INFO  | It takes a moment until task 47217dcf-d4bf-427a-a86e-fd5afde0bdd2 (ceph-create-lvm-devices) has been started and output is visible here. 
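The `ceph-create-lvm-devices` task that follows builds a dict of block VGs to PVs before creating the volume groups. A hedged sketch of that pairing: the VG name is again `ceph-<osd_lvm_uuid>`, keyed to its backing device. The `/dev/<name>` device paths here are an illustrative assumption — the play resolves devices via the by-id links and partition lists gathered in the preceding tasks, which this sketch does not reproduce.

```python
# ceph_osd_devices for testbed-node-3, as looped over in
# "Create dict of block VGs -> PVs from ceph_osd_devices".
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "159b9ed4-8d08-5970-86a8-bd63a32380d6"},
    "sdc": {"osd_lvm_uuid": "338f76e1-8833-5be4-9943-9980bb5050e8"},
}

# Assumed mapping: VG "ceph-<uuid>" backed by the raw device.
# (Device-path resolution in the real play may differ.)
vgs_to_pvs = {
    f"ceph-{params['osd_lvm_uuid']}": f"/dev/{device}"
    for device, params in ceph_osd_devices.items()
}

print(vgs_to_pvs)
```

The subsequent "Create block VGs" and "Create block LVs" tasks then create one VG per entry of this dict and one `osd-block-<uuid>` LV inside each, as the two `changed:` items per task show.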
2025-08-29 19:19:54.697720 | orchestrator | 2025-08-29 19:19:54.697834 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 19:19:54.697851 | orchestrator | 2025-08-29 19:19:54.697864 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 19:19:54.697876 | orchestrator | Friday 29 August 2025 19:19:46 +0000 (0:00:00.314) 0:00:00.314 ********* 2025-08-29 19:19:54.697887 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 19:19:54.697898 | orchestrator | 2025-08-29 19:19:54.697909 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 19:19:54.697920 | orchestrator | Friday 29 August 2025 19:19:46 +0000 (0:00:00.243) 0:00:00.558 ********* 2025-08-29 19:19:54.697931 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:19:54.697943 | orchestrator | 2025-08-29 19:19:54.697954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.697964 | orchestrator | Friday 29 August 2025 19:19:47 +0000 (0:00:00.234) 0:00:00.792 ********* 2025-08-29 19:19:54.697975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 19:19:54.697988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 19:19:54.697999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 19:19:54.698010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 19:19:54.698080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 19:19:54.698092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 19:19:54.698103 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 19:19:54.698113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 19:19:54.698124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 19:19:54.698135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 19:19:54.698146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 19:19:54.698157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 19:19:54.698167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 19:19:54.698178 | orchestrator | 2025-08-29 19:19:54.698189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698219 | orchestrator | Friday 29 August 2025 19:19:47 +0000 (0:00:00.407) 0:00:01.200 ********* 2025-08-29 19:19:54.698231 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698242 | orchestrator | 2025-08-29 19:19:54.698252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698263 | orchestrator | Friday 29 August 2025 19:19:48 +0000 (0:00:00.479) 0:00:01.680 ********* 2025-08-29 19:19:54.698277 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698288 | orchestrator | 2025-08-29 19:19:54.698300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698313 | orchestrator | Friday 29 August 2025 19:19:48 +0000 (0:00:00.215) 0:00:01.895 ********* 2025-08-29 19:19:54.698325 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698337 | orchestrator | 2025-08-29 19:19:54.698349 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-08-29 19:19:54.698362 | orchestrator | Friday 29 August 2025 19:19:48 +0000 (0:00:00.229) 0:00:02.125 ********* 2025-08-29 19:19:54.698374 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698385 | orchestrator | 2025-08-29 19:19:54.698397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698410 | orchestrator | Friday 29 August 2025 19:19:48 +0000 (0:00:00.205) 0:00:02.331 ********* 2025-08-29 19:19:54.698422 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698434 | orchestrator | 2025-08-29 19:19:54.698446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698458 | orchestrator | Friday 29 August 2025 19:19:48 +0000 (0:00:00.226) 0:00:02.557 ********* 2025-08-29 19:19:54.698470 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698483 | orchestrator | 2025-08-29 19:19:54.698495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698507 | orchestrator | Friday 29 August 2025 19:19:49 +0000 (0:00:00.188) 0:00:02.746 ********* 2025-08-29 19:19:54.698519 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698530 | orchestrator | 2025-08-29 19:19:54.698540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698551 | orchestrator | Friday 29 August 2025 19:19:49 +0000 (0:00:00.193) 0:00:02.939 ********* 2025-08-29 19:19:54.698582 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.698592 | orchestrator | 2025-08-29 19:19:54.698603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698614 | orchestrator | Friday 29 August 2025 19:19:49 +0000 (0:00:00.212) 0:00:03.152 ********* 2025-08-29 19:19:54.698625 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920) 2025-08-29 19:19:54.698637 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920) 2025-08-29 19:19:54.698647 | orchestrator | 2025-08-29 19:19:54.698658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698669 | orchestrator | Friday 29 August 2025 19:19:50 +0000 (0:00:00.439) 0:00:03.591 ********* 2025-08-29 19:19:54.698697 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0a4c4485-d2ea-4599-9435-e606068873fe) 2025-08-29 19:19:54.698709 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0a4c4485-d2ea-4599-9435-e606068873fe) 2025-08-29 19:19:54.698720 | orchestrator | 2025-08-29 19:19:54.698731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698742 | orchestrator | Friday 29 August 2025 19:19:50 +0000 (0:00:00.420) 0:00:04.012 ********* 2025-08-29 19:19:54.698752 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_21104e56-4cdf-49d9-91fd-13aff314e467) 2025-08-29 19:19:54.698763 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_21104e56-4cdf-49d9-91fd-13aff314e467) 2025-08-29 19:19:54.698774 | orchestrator | 2025-08-29 19:19:54.698784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698801 | orchestrator | Friday 29 August 2025 19:19:51 +0000 (0:00:00.678) 0:00:04.690 ********* 2025-08-29 19:19:54.698812 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5bba166-d17c-451d-864c-9f74c60a90a3) 2025-08-29 19:19:54.698823 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5bba166-d17c-451d-864c-9f74c60a90a3) 2025-08-29 19:19:54.698833 | orchestrator | 2025-08-29 19:19:54.698844 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:19:54.698854 | orchestrator | Friday 29 August 2025 19:19:52 +0000 (0:00:00.904) 0:00:05.595 ********* 2025-08-29 19:19:54.698865 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 19:19:54.698876 | orchestrator | 2025-08-29 19:19:54.698886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.698897 | orchestrator | Friday 29 August 2025 19:19:52 +0000 (0:00:00.386) 0:00:05.981 ********* 2025-08-29 19:19:54.698907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 19:19:54.698918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 19:19:54.698928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 19:19:54.698939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 19:19:54.698961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 19:19:54.698972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 19:19:54.698982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 19:19:54.698993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 19:19:54.699003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 19:19:54.699014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 19:19:54.699024 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 19:19:54.699035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 19:19:54.699050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 19:19:54.699061 | orchestrator | 2025-08-29 19:19:54.699071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699088 | orchestrator | Friday 29 August 2025 19:19:52 +0000 (0:00:00.464) 0:00:06.445 ********* 2025-08-29 19:19:54.699106 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699125 | orchestrator | 2025-08-29 19:19:54.699156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699174 | orchestrator | Friday 29 August 2025 19:19:53 +0000 (0:00:00.218) 0:00:06.663 ********* 2025-08-29 19:19:54.699192 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699210 | orchestrator | 2025-08-29 19:19:54.699227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699244 | orchestrator | Friday 29 August 2025 19:19:53 +0000 (0:00:00.248) 0:00:06.911 ********* 2025-08-29 19:19:54.699261 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699280 | orchestrator | 2025-08-29 19:19:54.699299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699318 | orchestrator | Friday 29 August 2025 19:19:53 +0000 (0:00:00.277) 0:00:07.188 ********* 2025-08-29 19:19:54.699336 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699354 | orchestrator | 2025-08-29 19:19:54.699366 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699386 | orchestrator | Friday 29 August 2025 
19:19:53 +0000 (0:00:00.200) 0:00:07.389 ********* 2025-08-29 19:19:54.699398 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699408 | orchestrator | 2025-08-29 19:19:54.699419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699430 | orchestrator | Friday 29 August 2025 19:19:54 +0000 (0:00:00.247) 0:00:07.637 ********* 2025-08-29 19:19:54.699441 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699451 | orchestrator | 2025-08-29 19:19:54.699462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699473 | orchestrator | Friday 29 August 2025 19:19:54 +0000 (0:00:00.198) 0:00:07.835 ********* 2025-08-29 19:19:54.699484 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:19:54.699494 | orchestrator | 2025-08-29 19:19:54.699505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:19:54.699516 | orchestrator | Friday 29 August 2025 19:19:54 +0000 (0:00:00.229) 0:00:08.064 ********* 2025-08-29 19:19:54.699537 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791065 | orchestrator | 2025-08-29 19:20:02.791188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:02.791205 | orchestrator | Friday 29 August 2025 19:19:54 +0000 (0:00:00.196) 0:00:08.260 ********* 2025-08-29 19:20:02.791216 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 19:20:02.791227 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 19:20:02.791238 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 19:20:02.791248 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 19:20:02.791258 | orchestrator | 2025-08-29 19:20:02.791268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:02.791278 | 
orchestrator | Friday 29 August 2025 19:19:55 +0000 (0:00:01.291) 0:00:09.552 ********* 2025-08-29 19:20:02.791288 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791298 | orchestrator | 2025-08-29 19:20:02.791308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:02.791317 | orchestrator | Friday 29 August 2025 19:19:56 +0000 (0:00:00.256) 0:00:09.808 ********* 2025-08-29 19:20:02.791327 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791337 | orchestrator | 2025-08-29 19:20:02.791346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:02.791356 | orchestrator | Friday 29 August 2025 19:19:56 +0000 (0:00:00.175) 0:00:09.984 ********* 2025-08-29 19:20:02.791366 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791376 | orchestrator | 2025-08-29 19:20:02.791386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:02.791396 | orchestrator | Friday 29 August 2025 19:19:56 +0000 (0:00:00.251) 0:00:10.235 ********* 2025-08-29 19:20:02.791405 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791415 | orchestrator | 2025-08-29 19:20:02.791425 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 19:20:02.791434 | orchestrator | Friday 29 August 2025 19:19:56 +0000 (0:00:00.182) 0:00:10.418 ********* 2025-08-29 19:20:02.791444 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791453 | orchestrator | 2025-08-29 19:20:02.791463 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 19:20:02.791473 | orchestrator | Friday 29 August 2025 19:19:56 +0000 (0:00:00.115) 0:00:10.533 ********* 2025-08-29 19:20:02.791483 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'159b9ed4-8d08-5970-86a8-bd63a32380d6'}}) 2025-08-29 19:20:02.791493 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '338f76e1-8833-5be4-9943-9980bb5050e8'}}) 2025-08-29 19:20:02.791502 | orchestrator | 2025-08-29 19:20:02.791512 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 19:20:02.791522 | orchestrator | Friday 29 August 2025 19:19:57 +0000 (0:00:00.153) 0:00:10.687 ********* 2025-08-29 19:20:02.791533 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'}) 2025-08-29 19:20:02.791594 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'}) 2025-08-29 19:20:02.791608 | orchestrator | 2025-08-29 19:20:02.791619 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 19:20:02.791631 | orchestrator | Friday 29 August 2025 19:19:59 +0000 (0:00:01.947) 0:00:12.634 ********* 2025-08-29 19:20:02.791643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})  2025-08-29 19:20:02.791655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})  2025-08-29 19:20:02.791666 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:20:02.791677 | orchestrator | 2025-08-29 19:20:02.791688 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 19:20:02.791699 | orchestrator | Friday 29 August 2025 19:19:59 +0000 (0:00:00.139) 0:00:12.773 ********* 2025-08-29 19:20:02.791711 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.791722 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.791733 | orchestrator |
2025-08-29 19:20:02.791744 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 19:20:02.791755 | orchestrator | Friday 29 August 2025 19:20:00 +0000 (0:00:01.378) 0:00:14.152 *********
2025-08-29 19:20:02.791765 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.791777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.791788 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.791799 | orchestrator |
2025-08-29 19:20:02.791811 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 19:20:02.791822 | orchestrator | Friday 29 August 2025 19:20:00 +0000 (0:00:00.153) 0:00:14.305 *********
2025-08-29 19:20:02.791833 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.791845 | orchestrator |
2025-08-29 19:20:02.791854 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 19:20:02.791882 | orchestrator | Friday 29 August 2025 19:20:00 +0000 (0:00:00.143) 0:00:14.449 *********
2025-08-29 19:20:02.791893 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.791903 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.791912 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.791922 | orchestrator |
2025-08-29 19:20:02.791931 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 19:20:02.791941 | orchestrator | Friday 29 August 2025 19:20:01 +0000 (0:00:00.388) 0:00:14.837 *********
2025-08-29 19:20:02.791950 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.791960 | orchestrator |
2025-08-29 19:20:02.791970 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 19:20:02.791979 | orchestrator | Friday 29 August 2025 19:20:01 +0000 (0:00:00.158) 0:00:14.996 *********
2025-08-29 19:20:02.791989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.792007 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.792017 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792026 | orchestrator |
2025-08-29 19:20:02.792036 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 19:20:02.792045 | orchestrator | Friday 29 August 2025 19:20:01 +0000 (0:00:00.158) 0:00:15.155 *********
2025-08-29 19:20:02.792055 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792064 | orchestrator |
2025-08-29 19:20:02.792074 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 19:20:02.792083 | orchestrator | Friday 29 August 2025 19:20:01 +0000 (0:00:00.142) 0:00:15.297 *********
2025-08-29 19:20:02.792093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.792102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.792112 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792121 | orchestrator |
2025-08-29 19:20:02.792131 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 19:20:02.792140 | orchestrator | Friday 29 August 2025 19:20:01 +0000 (0:00:00.187) 0:00:15.485 *********
2025-08-29 19:20:02.792150 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:02.792160 | orchestrator |
2025-08-29 19:20:02.792169 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 19:20:02.792179 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.140) 0:00:15.626 *********
2025-08-29 19:20:02.792210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.792220 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.792230 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792239 | orchestrator |
2025-08-29 19:20:02.792249 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 19:20:02.792258 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.146) 0:00:15.773 *********
2025-08-29 19:20:02.792268 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.792278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.792287 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792296 | orchestrator |
2025-08-29 19:20:02.792306 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 19:20:02.792316 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.149) 0:00:15.922 *********
2025-08-29 19:20:02.792325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:02.792335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:02.792344 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792354 | orchestrator |
2025-08-29 19:20:02.792363 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 19:20:02.792373 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.153) 0:00:16.075 *********
2025-08-29 19:20:02.792382 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792398 | orchestrator |
2025-08-29 19:20:02.792408 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 19:20:02.792418 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.148) 0:00:16.223 *********
2025-08-29 19:20:02.792427 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:02.792437 | orchestrator |
2025-08-29 19:20:02.792451 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 19:20:09.279459 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.131) 0:00:16.354 *********
2025-08-29 19:20:09.279530 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.279565 | orchestrator |
2025-08-29 19:20:09.279572 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 19:20:09.279579 | orchestrator | Friday 29 August 2025 19:20:02 +0000 (0:00:00.135) 0:00:16.490 *********
2025-08-29 19:20:09.279586 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:20:09.279593 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-08-29 19:20:09.279599 | orchestrator | }
2025-08-29 19:20:09.279606 | orchestrator |
2025-08-29 19:20:09.279613 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 19:20:09.279619 | orchestrator | Friday 29 August 2025 19:20:03 +0000 (0:00:00.362) 0:00:16.853 *********
2025-08-29 19:20:09.279625 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:20:09.279632 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-08-29 19:20:09.279638 | orchestrator | }
2025-08-29 19:20:09.279644 | orchestrator |
2025-08-29 19:20:09.279651 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 19:20:09.279657 | orchestrator | Friday 29 August 2025 19:20:03 +0000 (0:00:00.167) 0:00:17.021 *********
2025-08-29 19:20:09.279663 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:20:09.279670 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 19:20:09.279676 | orchestrator | }
2025-08-29 19:20:09.279683 | orchestrator |
2025-08-29 19:20:09.279690 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 19:20:09.279696 | orchestrator | Friday 29 August 2025 19:20:03 +0000 (0:00:00.183) 0:00:17.204 *********
2025-08-29 19:20:09.279702 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:09.279709 | orchestrator |
2025-08-29 19:20:09.279715 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 19:20:09.279721 | orchestrator | Friday 29 August 2025 19:20:04 +0000 (0:00:00.678) 0:00:17.883 *********
2025-08-29 19:20:09.279727 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:09.279734 | orchestrator |
2025-08-29 19:20:09.279740 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 19:20:09.279746 | orchestrator | Friday 29 August 2025 19:20:04 +0000 (0:00:00.510) 0:00:18.393 *********
2025-08-29 19:20:09.279761 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:09.279773 | orchestrator |
2025-08-29 19:20:09.279779 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 19:20:09.279786 | orchestrator | Friday 29 August 2025 19:20:05 +0000 (0:00:00.162) 0:00:18.869 *********
2025-08-29 19:20:09.279792 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:09.279798 | orchestrator |
2025-08-29 19:20:09.279804 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 19:20:09.279811 | orchestrator | Friday 29 August 2025 19:20:05 +0000 (0:00:00.112) 0:00:19.031 *********
2025-08-29 19:20:09.279817 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.279823 | orchestrator |
2025-08-29 19:20:09.279830 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 19:20:09.279836 | orchestrator | Friday 29 August 2025 19:20:05 +0000 (0:00:00.126) 0:00:19.144 *********
2025-08-29 19:20:09.279842 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.279849 | orchestrator |
2025-08-29 19:20:09.279855 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 19:20:09.279861 | orchestrator | Friday 29 August 2025 19:20:05 +0000 (0:00:00.140) 0:00:19.271 *********
2025-08-29 19:20:09.279868 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:20:09.279889 | orchestrator |  "vgs_report": {
2025-08-29 19:20:09.279905 | orchestrator |  "vg": []
2025-08-29 19:20:09.279912 | orchestrator |  }
2025-08-29 19:20:09.279918 | orchestrator | }
2025-08-29 19:20:09.279924 | orchestrator |
2025-08-29 19:20:09.279930 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 19:20:09.279936 | orchestrator | Friday 29 August 2025 19:20:05 +0000 (0:00:00.140) 0:00:19.412 *********
2025-08-29 19:20:09.279943 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.279949 | orchestrator |
2025-08-29 19:20:09.279955 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 19:20:09.279962 | orchestrator | Friday 29 August 2025 19:20:05 +0000 (0:00:00.142) 0:00:19.541 *********
2025-08-29 19:20:09.279968 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.279974 | orchestrator |
2025-08-29 19:20:09.279980 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 19:20:09.279986 | orchestrator | Friday 29 August 2025 19:20:06 +0000 (0:00:00.362) 0:00:19.683 *********
2025-08-29 19:20:09.279993 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.279999 | orchestrator |
2025-08-29 19:20:09.280005 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 19:20:09.280011 | orchestrator | Friday 29 August 2025 19:20:06 +0000 (0:00:00.144) 0:00:20.046 *********
2025-08-29 19:20:09.280017 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280024 | orchestrator |
2025-08-29 19:20:09.280030 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 19:20:09.280036 | orchestrator | Friday 29 August 2025 19:20:06 +0000 (0:00:00.146) 0:00:20.191 *********
2025-08-29 19:20:09.280043 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280050 | orchestrator |
2025-08-29 19:20:09.280058 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 19:20:09.280065 | orchestrator | Friday 29 August 2025 19:20:06 +0000 (0:00:00.146) 0:00:20.337 *********
2025-08-29 19:20:09.280072 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280079 | orchestrator |
2025-08-29 19:20:09.280086 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 19:20:09.280093 | orchestrator | Friday 29 August 2025 19:20:06 +0000 (0:00:00.126) 0:00:20.464 *********
2025-08-29 19:20:09.280100 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280107 | orchestrator |
2025-08-29 19:20:09.280115 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 19:20:09.280121 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.128) 0:00:20.592 *********
2025-08-29 19:20:09.280129 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280136 | orchestrator |
2025-08-29 19:20:09.280143 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 19:20:09.280160 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.122) 0:00:20.715 *********
2025-08-29 19:20:09.280167 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280174 | orchestrator |
2025-08-29 19:20:09.280181 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 19:20:09.280188 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.126) 0:00:20.841 *********
2025-08-29 19:20:09.280195 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280201 | orchestrator |
2025-08-29 19:20:09.280209 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 19:20:09.280216 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.124) 0:00:20.966 *********
2025-08-29 19:20:09.280223 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280229 | orchestrator |
2025-08-29 19:20:09.280237 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 19:20:09.280243 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.138) 0:00:21.104 *********
2025-08-29 19:20:09.280251 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280257 | orchestrator |
2025-08-29 19:20:09.280270 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 19:20:09.280277 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.143) 0:00:21.248 *********
2025-08-29 19:20:09.280284 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280291 | orchestrator |
2025-08-29 19:20:09.280297 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 19:20:09.280303 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.145) 0:00:21.393 *********
2025-08-29 19:20:09.280309 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280316 | orchestrator |
2025-08-29 19:20:09.280322 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 19:20:09.280328 | orchestrator | Friday 29 August 2025 19:20:07 +0000 (0:00:00.140) 0:00:21.534 *********
2025-08-29 19:20:09.280334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:09.280341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:09.280348 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280354 | orchestrator |
2025-08-29 19:20:09.280360 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 19:20:09.280366 | orchestrator | Friday 29 August 2025 19:20:08 +0000 (0:00:00.403) 0:00:21.937 *********
2025-08-29 19:20:09.280372 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:09.280379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:09.280385 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280391 | orchestrator |
2025-08-29 19:20:09.280397 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 19:20:09.280403 | orchestrator | Friday 29 August 2025 19:20:08 +0000 (0:00:00.210) 0:00:22.147 *********
2025-08-29 19:20:09.280410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:09.280416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:09.280422 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280428 | orchestrator |
2025-08-29 19:20:09.280434 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 19:20:09.280441 | orchestrator | Friday 29 August 2025 19:20:08 +0000 (0:00:00.196) 0:00:22.344 *********
2025-08-29 19:20:09.280447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:09.280453 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:09.280459 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280465 | orchestrator |
2025-08-29 19:20:09.280471 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 19:20:09.280477 | orchestrator | Friday 29 August 2025 19:20:08 +0000 (0:00:00.155) 0:00:22.500 *********
2025-08-29 19:20:09.280484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:09.280490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:09.280496 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:09.280506 | orchestrator |
2025-08-29 19:20:09.280513 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 19:20:09.280519 | orchestrator | Friday 29 August 2025 19:20:09 +0000 (0:00:00.157) 0:00:22.657 *********
2025-08-29 19:20:09.280534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:09.280556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130297 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:14.130378 | orchestrator |
2025-08-29 19:20:14.130384 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 19:20:14.130390 | orchestrator | Friday 29 August 2025 19:20:09 +0000 (0:00:00.187) 0:00:22.844 *********
2025-08-29 19:20:14.130395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:14.130401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130405 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:14.130409 | orchestrator |
2025-08-29 19:20:14.130413 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 19:20:14.130417 | orchestrator | Friday 29 August 2025 19:20:09 +0000 (0:00:00.163) 0:00:23.008 *********
2025-08-29 19:20:14.130421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:14.130425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130429 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:14.130433 | orchestrator |
2025-08-29 19:20:14.130436 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 19:20:14.130440 | orchestrator | Friday 29 August 2025 19:20:09 +0000 (0:00:00.145) 0:00:23.153 *********
2025-08-29 19:20:14.130444 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:14.130449 | orchestrator |
2025-08-29 19:20:14.130453 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 19:20:14.130457 | orchestrator | Friday 29 August 2025 19:20:10 +0000 (0:00:00.458) 0:00:23.612 *********
2025-08-29 19:20:14.130461 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:14.130466 | orchestrator |
2025-08-29 19:20:14.130470 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 19:20:14.130474 | orchestrator | Friday 29 August 2025 19:20:10 +0000 (0:00:00.476) 0:00:24.088 *********
2025-08-29 19:20:14.130479 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:20:14.130483 | orchestrator |
2025-08-29 19:20:14.130488 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 19:20:14.130492 | orchestrator | Friday 29 August 2025 19:20:10 +0000 (0:00:00.147) 0:00:24.235 *********
2025-08-29 19:20:14.130497 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'vg_name': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:14.130502 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'vg_name': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130507 | orchestrator |
2025-08-29 19:20:14.130523 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 19:20:14.130527 | orchestrator | Friday 29 August 2025 19:20:10 +0000 (0:00:00.154) 0:00:24.390 *********
2025-08-29 19:20:14.130573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:14.130593 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130597 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:14.130602 | orchestrator |
2025-08-29 19:20:14.130606 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 19:20:14.130611 | orchestrator | Friday 29 August 2025 19:20:11 +0000 (0:00:00.296) 0:00:24.686 *********
2025-08-29 19:20:14.130615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:14.130619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130624 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:14.130628 | orchestrator |
2025-08-29 19:20:14.130632 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 19:20:14.130637 | orchestrator | Friday 29 August 2025 19:20:11 +0000 (0:00:00.166) 0:00:24.852 *********
2025-08-29 19:20:14.130641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'})
2025-08-29 19:20:14.130646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'})
2025-08-29 19:20:14.130650 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:20:14.130654 | orchestrator |
2025-08-29 19:20:14.130659 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 19:20:14.130663 | orchestrator | Friday 29 August 2025 19:20:11 +0000 (0:00:00.131) 0:00:24.984 *********
2025-08-29 19:20:14.130667 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:20:14.130672 | orchestrator |  "lvm_report": {
2025-08-29 19:20:14.130676 | orchestrator |  "lv": [
2025-08-29 19:20:14.130681 | orchestrator |  {
2025-08-29 19:20:14.130696 | orchestrator |  "lv_name": "osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6",
2025-08-29 19:20:14.130702 | orchestrator |  "vg_name": "ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6"
2025-08-29 19:20:14.130706 | orchestrator |  },
2025-08-29 19:20:14.130710 | orchestrator |  {
2025-08-29 19:20:14.130715 | orchestrator |  "lv_name": "osd-block-338f76e1-8833-5be4-9943-9980bb5050e8",
2025-08-29 19:20:14.130719 | orchestrator |  "vg_name": "ceph-338f76e1-8833-5be4-9943-9980bb5050e8"
2025-08-29 19:20:14.130723 | orchestrator |  }
2025-08-29 19:20:14.130728 | orchestrator |  ],
2025-08-29 19:20:14.130732 | orchestrator |  "pv": [
2025-08-29 19:20:14.130736 | orchestrator |  {
2025-08-29 19:20:14.130741 | orchestrator |  "pv_name": "/dev/sdb",
2025-08-29 19:20:14.130745 | orchestrator |  "vg_name": "ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6"
2025-08-29 19:20:14.130749 | orchestrator |  },
2025-08-29 19:20:14.130754 | orchestrator |  {
2025-08-29 19:20:14.130758 | orchestrator |  "pv_name": "/dev/sdc",
2025-08-29 19:20:14.130762 | orchestrator |  "vg_name": "ceph-338f76e1-8833-5be4-9943-9980bb5050e8"
2025-08-29 19:20:14.130767 | orchestrator |  }
2025-08-29 19:20:14.130771 | orchestrator |  ]
2025-08-29 19:20:14.130775 | orchestrator |  }
2025-08-29 19:20:14.130779 | orchestrator | }
2025-08-29 19:20:14.130784 | orchestrator |
2025-08-29 19:20:14.130788 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 19:20:14.130793 | orchestrator |
2025-08-29 19:20:14.130797 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 19:20:14.130801 | orchestrator | Friday 29 August 2025 19:20:11 +0000 (0:00:00.257) 0:00:25.242 *********
2025-08-29 19:20:14.130806 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 19:20:14.130816 | orchestrator |
2025-08-29 19:20:14.130820 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 19:20:14.130825 | orchestrator | Friday 29 August 2025 19:20:11 +0000 (0:00:00.223) 0:00:25.465 *********
2025-08-29 19:20:14.130829 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:20:14.130833 | orchestrator |
2025-08-29 19:20:14.130838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.130842 | orchestrator | Friday 29 August 2025 19:20:12 +0000 (0:00:00.234) 0:00:25.700 *********
2025-08-29 19:20:14.130846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-08-29 19:20:14.130851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-08-29 19:20:14.130855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-08-29 19:20:14.130859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-08-29 19:20:14.130864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-08-29 19:20:14.130868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-08-29 19:20:14.130873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-08-29 19:20:14.130881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-08-29 19:20:14.130886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-08-29 19:20:14.130891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-08-29 19:20:14.130896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-08-29 19:20:14.130901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-08-29 19:20:14.130906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-08-29 19:20:14.130910 | orchestrator |
2025-08-29 19:20:14.130915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.130920 | orchestrator | Friday 29 August 2025 19:20:12 +0000 (0:00:00.387) 0:00:26.087 *********
2025-08-29 19:20:14.130925 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.130930 | orchestrator |
2025-08-29 19:20:14.130935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.130940 | orchestrator | Friday 29 August 2025 19:20:12 +0000 (0:00:00.174) 0:00:26.262 *********
2025-08-29 19:20:14.130944 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.130949 | orchestrator |
2025-08-29 19:20:14.130954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.130959 | orchestrator | Friday 29 August 2025 19:20:12 +0000 (0:00:00.185) 0:00:26.447 *********
2025-08-29 19:20:14.130964 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.130969 | orchestrator |
2025-08-29 19:20:14.130973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.130978 | orchestrator | Friday 29 August 2025 19:20:13 +0000 (0:00:00.445) 0:00:26.893 *********
2025-08-29 19:20:14.130983 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.130988 | orchestrator |
2025-08-29 19:20:14.130992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.130997 | orchestrator | Friday 29 August 2025 19:20:13 +0000 (0:00:00.197) 0:00:27.090 *********
2025-08-29 19:20:14.131002 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.131007 | orchestrator |
2025-08-29 19:20:14.131011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.131016 | orchestrator | Friday 29 August 2025 19:20:13 +0000 (0:00:00.189) 0:00:27.279 *********
2025-08-29 19:20:14.131021 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.131026 | orchestrator |
2025-08-29 19:20:14.131034 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:14.131039 | orchestrator | Friday 29 August 2025 19:20:13 +0000 (0:00:00.192) 0:00:27.472 *********
2025-08-29 19:20:14.131044 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:14.131049 | orchestrator |
2025-08-29 19:20:14.131057 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:24.359169 | orchestrator | Friday 29 August 2025 19:20:14 +0000 (0:00:00.221) 0:00:27.694 *********
2025-08-29 19:20:24.359285 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:24.359311 | orchestrator |
2025-08-29 19:20:24.359331 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:24.359350 | orchestrator | Friday 29 August 2025 19:20:14 +0000 (0:00:00.197) 0:00:27.892 *********
2025-08-29 19:20:24.359369 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd)
2025-08-29 19:20:24.359388 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd)
2025-08-29 19:20:24.359405 | orchestrator |
2025-08-29 19:20:24.359421 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:24.359439 | orchestrator | Friday 29 August 2025 19:20:14 +0000 (0:00:00.411) 0:00:28.303 *********
2025-08-29 19:20:24.359455 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6)
2025-08-29 19:20:24.359472 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6)
2025-08-29 19:20:24.359489 | orchestrator |
2025-08-29 19:20:24.359507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:24.359594 | orchestrator | Friday 29 August 2025 19:20:15 +0000 (0:00:00.435) 0:00:28.739 *********
2025-08-29 19:20:24.359605 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32)
2025-08-29 19:20:24.359615 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32)
2025-08-29 19:20:24.359624 | orchestrator |
2025-08-29 19:20:24.359634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:24.359645 | orchestrator | Friday 29 August 2025 19:20:15 +0000 (0:00:00.420) 0:00:29.159 *********
2025-08-29 19:20:24.359654 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d)
2025-08-29 19:20:24.359664 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d)
2025-08-29 19:20:24.359674 | orchestrator |
2025-08-29 19:20:24.359686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 19:20:24.359697 | orchestrator | Friday 29 August 2025 19:20:16 +0000 (0:00:00.431) 0:00:29.591 *********
2025-08-29 19:20:24.359708 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 19:20:24.359719 | orchestrator |
2025-08-29 19:20:24.359730 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 19:20:24.359741 | orchestrator | Friday 29 August 2025 19:20:16 +0000 (0:00:00.379) 0:00:29.971 *********
2025-08-29 19:20:24.359752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 19:20:24.359764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 19:20:24.359774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 19:20:24.359785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 19:20:24.359796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 19:20:24.359807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 19:20:24.359838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 19:20:24.359869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 19:20:24.359880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 19:20:24.359891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 19:20:24.359906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 19:20:24.359923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 19:20:24.359941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 19:20:24.359958 | orchestrator |
2025-08-29 19:20:24.359977 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 19:20:24.359995 | orchestrator | Friday 29 August 2025 19:20:17 +0000 (0:00:00.651) 0:00:30.623 *********
2025-08-29 19:20:24.360014 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:24.360031 | orchestrator |
2025-08-29 19:20:24.360047 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 19:20:24.360065 | orchestrator | Friday 29
August 2025 19:20:17 +0000 (0:00:00.217) 0:00:30.841 ********* 2025-08-29 19:20:24.360081 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360097 | orchestrator | 2025-08-29 19:20:24.360112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360128 | orchestrator | Friday 29 August 2025 19:20:17 +0000 (0:00:00.252) 0:00:31.093 ********* 2025-08-29 19:20:24.360144 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360159 | orchestrator | 2025-08-29 19:20:24.360175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360191 | orchestrator | Friday 29 August 2025 19:20:17 +0000 (0:00:00.204) 0:00:31.298 ********* 2025-08-29 19:20:24.360201 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360211 | orchestrator | 2025-08-29 19:20:24.360242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360253 | orchestrator | Friday 29 August 2025 19:20:17 +0000 (0:00:00.202) 0:00:31.500 ********* 2025-08-29 19:20:24.360263 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360272 | orchestrator | 2025-08-29 19:20:24.360282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360291 | orchestrator | Friday 29 August 2025 19:20:18 +0000 (0:00:00.217) 0:00:31.718 ********* 2025-08-29 19:20:24.360301 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360310 | orchestrator | 2025-08-29 19:20:24.360320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360332 | orchestrator | Friday 29 August 2025 19:20:18 +0000 (0:00:00.223) 0:00:31.941 ********* 2025-08-29 19:20:24.360348 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360364 | orchestrator | 2025-08-29 19:20:24.360381 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360397 | orchestrator | Friday 29 August 2025 19:20:18 +0000 (0:00:00.220) 0:00:32.162 ********* 2025-08-29 19:20:24.360414 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360432 | orchestrator | 2025-08-29 19:20:24.360449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360467 | orchestrator | Friday 29 August 2025 19:20:18 +0000 (0:00:00.212) 0:00:32.374 ********* 2025-08-29 19:20:24.360482 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 19:20:24.360500 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 19:20:24.360510 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 19:20:24.360551 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 19:20:24.360561 | orchestrator | 2025-08-29 19:20:24.360572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360581 | orchestrator | Friday 29 August 2025 19:20:19 +0000 (0:00:00.834) 0:00:33.208 ********* 2025-08-29 19:20:24.360607 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360624 | orchestrator | 2025-08-29 19:20:24.360641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360657 | orchestrator | Friday 29 August 2025 19:20:19 +0000 (0:00:00.179) 0:00:33.388 ********* 2025-08-29 19:20:24.360672 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360689 | orchestrator | 2025-08-29 19:20:24.360708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360725 | orchestrator | Friday 29 August 2025 19:20:19 +0000 (0:00:00.178) 0:00:33.567 ********* 2025-08-29 19:20:24.360736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360745 | 
orchestrator | 2025-08-29 19:20:24.360755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:24.360764 | orchestrator | Friday 29 August 2025 19:20:20 +0000 (0:00:00.496) 0:00:34.064 ********* 2025-08-29 19:20:24.360774 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360787 | orchestrator | 2025-08-29 19:20:24.360804 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 19:20:24.360821 | orchestrator | Friday 29 August 2025 19:20:20 +0000 (0:00:00.205) 0:00:34.269 ********* 2025-08-29 19:20:24.360846 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.360865 | orchestrator | 2025-08-29 19:20:24.360883 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 19:20:24.360899 | orchestrator | Friday 29 August 2025 19:20:20 +0000 (0:00:00.126) 0:00:34.396 ********* 2025-08-29 19:20:24.360915 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f946ce78-a8de-59ba-8bf5-045c292b6708'}}) 2025-08-29 19:20:24.360925 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}}) 2025-08-29 19:20:24.360942 | orchestrator | 2025-08-29 19:20:24.360958 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 19:20:24.360975 | orchestrator | Friday 29 August 2025 19:20:21 +0000 (0:00:00.181) 0:00:34.577 ********* 2025-08-29 19:20:24.360993 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'}) 2025-08-29 19:20:24.361012 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}) 2025-08-29 19:20:24.361030 | 
orchestrator | 2025-08-29 19:20:24.361047 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 19:20:24.361065 | orchestrator | Friday 29 August 2025 19:20:22 +0000 (0:00:01.858) 0:00:36.436 ********* 2025-08-29 19:20:24.361082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:24.361100 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:24.361118 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:24.361136 | orchestrator | 2025-08-29 19:20:24.361152 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 19:20:24.361168 | orchestrator | Friday 29 August 2025 19:20:23 +0000 (0:00:00.165) 0:00:36.601 ********* 2025-08-29 19:20:24.361178 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'}) 2025-08-29 19:20:24.361188 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}) 2025-08-29 19:20:24.361205 | orchestrator | 2025-08-29 19:20:24.361233 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 19:20:30.160163 | orchestrator | Friday 29 August 2025 19:20:24 +0000 (0:00:01.317) 0:00:37.918 ********* 2025-08-29 19:20:30.160279 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160296 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160302 | orchestrator | 2025-08-29 19:20:30.160310 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 19:20:30.160317 | orchestrator | Friday 29 August 2025 19:20:24 +0000 (0:00:00.162) 0:00:38.081 ********* 2025-08-29 19:20:30.160323 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160328 | orchestrator | 2025-08-29 19:20:30.160335 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 19:20:30.160341 | orchestrator | Friday 29 August 2025 19:20:24 +0000 (0:00:00.143) 0:00:38.224 ********* 2025-08-29 19:20:30.160348 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160359 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160365 | orchestrator | 2025-08-29 19:20:30.160371 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 19:20:30.160377 | orchestrator | Friday 29 August 2025 19:20:24 +0000 (0:00:00.158) 0:00:38.383 ********* 2025-08-29 19:20:30.160383 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160390 | orchestrator | 2025-08-29 19:20:30.160396 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 19:20:30.160402 | orchestrator | Friday 29 August 2025 19:20:24 +0000 (0:00:00.144) 0:00:38.528 ********* 2025-08-29 19:20:30.160409 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160421 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160428 | orchestrator | 2025-08-29 19:20:30.160434 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 19:20:30.160438 | orchestrator | Friday 29 August 2025 19:20:25 +0000 (0:00:00.158) 0:00:38.687 ********* 2025-08-29 19:20:30.160454 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160460 | orchestrator | 2025-08-29 19:20:30.160467 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 19:20:30.160473 | orchestrator | Friday 29 August 2025 19:20:25 +0000 (0:00:00.337) 0:00:39.024 ********* 2025-08-29 19:20:30.160479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160492 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160498 | orchestrator | 2025-08-29 19:20:30.160504 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 19:20:30.160567 | orchestrator | Friday 29 August 2025 19:20:25 +0000 (0:00:00.167) 0:00:39.192 ********* 2025-08-29 19:20:30.160574 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:20:30.160581 | orchestrator | 2025-08-29 19:20:30.160586 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-08-29 19:20:30.160592 | orchestrator | Friday 29 August 2025 19:20:25 +0000 (0:00:00.160) 0:00:39.352 ********* 2025-08-29 19:20:30.160605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160611 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160618 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160624 | orchestrator | 2025-08-29 19:20:30.160630 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 19:20:30.160635 | orchestrator | Friday 29 August 2025 19:20:25 +0000 (0:00:00.163) 0:00:39.515 ********* 2025-08-29 19:20:30.160641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160652 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160658 | orchestrator | 2025-08-29 19:20:30.160664 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 19:20:30.160669 | orchestrator | Friday 29 August 2025 19:20:26 +0000 (0:00:00.176) 0:00:39.692 ********* 2025-08-29 19:20:30.160691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})  2025-08-29 19:20:30.160697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 
'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})  2025-08-29 19:20:30.160703 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160709 | orchestrator | 2025-08-29 19:20:30.160715 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 19:20:30.160721 | orchestrator | Friday 29 August 2025 19:20:26 +0000 (0:00:00.158) 0:00:39.850 ********* 2025-08-29 19:20:30.160728 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160734 | orchestrator | 2025-08-29 19:20:30.160741 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 19:20:30.160748 | orchestrator | Friday 29 August 2025 19:20:26 +0000 (0:00:00.153) 0:00:40.004 ********* 2025-08-29 19:20:30.160754 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160761 | orchestrator | 2025-08-29 19:20:30.160767 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 19:20:30.160774 | orchestrator | Friday 29 August 2025 19:20:26 +0000 (0:00:00.161) 0:00:40.166 ********* 2025-08-29 19:20:30.160780 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.160787 | orchestrator | 2025-08-29 19:20:30.160793 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 19:20:30.160799 | orchestrator | Friday 29 August 2025 19:20:26 +0000 (0:00:00.167) 0:00:40.333 ********* 2025-08-29 19:20:30.160805 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 19:20:30.160812 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 19:20:30.160819 | orchestrator | } 2025-08-29 19:20:30.160826 | orchestrator | 2025-08-29 19:20:30.160833 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 19:20:30.160839 | orchestrator | Friday 29 August 2025 19:20:26 +0000 (0:00:00.133) 0:00:40.467 ********* 2025-08-29 19:20:30.160845 | 
orchestrator | ok: [testbed-node-4] => { 2025-08-29 19:20:30.160851 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 19:20:30.160857 | orchestrator | } 2025-08-29 19:20:30.160864 | orchestrator | 2025-08-29 19:20:30.160870 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 19:20:30.160876 | orchestrator | Friday 29 August 2025 19:20:27 +0000 (0:00:00.161) 0:00:40.628 ********* 2025-08-29 19:20:30.160882 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 19:20:30.160889 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 19:20:30.160903 | orchestrator | } 2025-08-29 19:20:30.160909 | orchestrator | 2025-08-29 19:20:30.160915 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 19:20:30.160921 | orchestrator | Friday 29 August 2025 19:20:27 +0000 (0:00:00.154) 0:00:40.782 ********* 2025-08-29 19:20:30.160928 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:20:30.160934 | orchestrator | 2025-08-29 19:20:30.160940 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 19:20:30.160946 | orchestrator | Friday 29 August 2025 19:20:27 +0000 (0:00:00.727) 0:00:41.509 ********* 2025-08-29 19:20:30.160952 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:20:30.160958 | orchestrator | 2025-08-29 19:20:30.160965 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 19:20:30.160971 | orchestrator | Friday 29 August 2025 19:20:28 +0000 (0:00:00.589) 0:00:42.099 ********* 2025-08-29 19:20:30.160978 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:20:30.160984 | orchestrator | 2025-08-29 19:20:30.160991 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 19:20:30.160997 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.506) 0:00:42.606 ********* 2025-08-29 
19:20:30.161003 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:20:30.161009 | orchestrator | 2025-08-29 19:20:30.161015 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 19:20:30.161021 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.168) 0:00:42.774 ********* 2025-08-29 19:20:30.161027 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.161034 | orchestrator | 2025-08-29 19:20:30.161041 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 19:20:30.161047 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.108) 0:00:42.883 ********* 2025-08-29 19:20:30.161061 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.161066 | orchestrator | 2025-08-29 19:20:30.161073 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 19:20:30.161079 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.113) 0:00:42.997 ********* 2025-08-29 19:20:30.161085 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 19:20:30.161091 | orchestrator |  "vgs_report": { 2025-08-29 19:20:30.161097 | orchestrator |  "vg": [] 2025-08-29 19:20:30.161103 | orchestrator |  } 2025-08-29 19:20:30.161110 | orchestrator | } 2025-08-29 19:20:30.161116 | orchestrator | 2025-08-29 19:20:30.161122 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 19:20:30.161129 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.162) 0:00:43.159 ********* 2025-08-29 19:20:30.161135 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.161141 | orchestrator | 2025-08-29 19:20:30.161147 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 19:20:30.161154 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.139) 0:00:43.299 ********* 2025-08-29 
19:20:30.161160 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.161166 | orchestrator | 2025-08-29 19:20:30.161173 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 19:20:30.161179 | orchestrator | Friday 29 August 2025 19:20:29 +0000 (0:00:00.136) 0:00:43.435 ********* 2025-08-29 19:20:30.161186 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.161192 | orchestrator | 2025-08-29 19:20:30.161198 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 19:20:30.161204 | orchestrator | Friday 29 August 2025 19:20:30 +0000 (0:00:00.146) 0:00:43.581 ********* 2025-08-29 19:20:30.161210 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:30.161216 | orchestrator | 2025-08-29 19:20:30.161223 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 19:20:30.161235 | orchestrator | Friday 29 August 2025 19:20:30 +0000 (0:00:00.143) 0:00:43.724 ********* 2025-08-29 19:20:35.060768 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.060887 | orchestrator | 2025-08-29 19:20:35.060929 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 19:20:35.060943 | orchestrator | Friday 29 August 2025 19:20:30 +0000 (0:00:00.153) 0:00:43.877 ********* 2025-08-29 19:20:35.060954 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.060964 | orchestrator | 2025-08-29 19:20:35.060976 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 19:20:35.060987 | orchestrator | Friday 29 August 2025 19:20:30 +0000 (0:00:00.366) 0:00:44.244 ********* 2025-08-29 19:20:35.060998 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.061008 | orchestrator | 2025-08-29 19:20:35.061019 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-08-29 19:20:35.061029 | orchestrator | Friday 29 August 2025 19:20:30 +0000 (0:00:00.147) 0:00:44.391 ********* 2025-08-29 19:20:35.061040 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.061051 | orchestrator | 2025-08-29 19:20:35.061061 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 19:20:35.061072 | orchestrator | Friday 29 August 2025 19:20:30 +0000 (0:00:00.177) 0:00:44.569 ********* 2025-08-29 19:20:35.061083 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.061093 | orchestrator | 2025-08-29 19:20:35.061104 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 19:20:35.061114 | orchestrator | Friday 29 August 2025 19:20:31 +0000 (0:00:00.139) 0:00:44.709 ********* 2025-08-29 19:20:35.061125 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.061136 | orchestrator | 2025-08-29 19:20:35.061146 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 19:20:35.061157 | orchestrator | Friday 29 August 2025 19:20:31 +0000 (0:00:00.145) 0:00:44.854 ********* 2025-08-29 19:20:35.061168 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.061178 | orchestrator | 2025-08-29 19:20:35.061189 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 19:20:35.061199 | orchestrator | Friday 29 August 2025 19:20:31 +0000 (0:00:00.143) 0:00:44.998 ********* 2025-08-29 19:20:35.061210 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:20:35.061221 | orchestrator | 2025-08-29 19:20:35.061232 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 19:20:35.061242 | orchestrator | Friday 29 August 2025 19:20:31 +0000 (0:00:00.136) 0:00:45.135 ********* 2025-08-29 19:20:35.061253 | orchestrator | skipping: [testbed-node-4] 
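The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above make the naming scheme visible in the log: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, from which the playbook derives a VG named `ceph-<uuid>` and a block LV named `osd-block-<uuid>`. A minimal Python sketch of that mapping (the playbook does this in Ansible/Jinja; the function name and the plain-dict shapes here are assumptions for illustration):

```python
# ceph_osd_devices as shown by the 'Create dict of block VGs -> PVs' task above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "f946ce78-a8de-59ba-8bf5-045c292b6708"},
    "sdc": {"osd_lvm_uuid": "9d878572-29ec-5c6d-9e5c-f341c26bb0e1"},
}

def lvm_volumes_from_osd_devices(devices: dict) -> list[dict]:
    """Derive the lvm_volumes-style items looped over by 'Create block VGs/LVs'."""
    return [
        {
            # LV name: osd-block-<osd_lvm_uuid>
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            # VG name: ceph-<osd_lvm_uuid>, one VG per backing PV (e.g. /dev/sdb)
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

for item in lvm_volumes_from_osd_devices(ceph_osd_devices):
    print(item["data_vg"], "->", item["data"])
```

These derived items are exactly the `{'data': ..., 'data_vg': ...}` dicts that appear in the `changed:` lines of the "Create block VGs" and "Create block LVs" tasks, and again in the `lvm_report` printed at the end of the play.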
2025-08-29 19:20:35.061263 | orchestrator |
2025-08-29 19:20:35.061274 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 19:20:35.061284 | orchestrator | Friday 29 August 2025 19:20:31 +0000 (0:00:00.146) 0:00:45.281 *********
2025-08-29 19:20:35.061296 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061309 | orchestrator |
2025-08-29 19:20:35.061322 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 19:20:35.061334 | orchestrator | Friday 29 August 2025 19:20:31 +0000 (0:00:00.138) 0:00:45.420 *********
2025-08-29 19:20:35.061363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061391 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061404 | orchestrator |
2025-08-29 19:20:35.061417 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 19:20:35.061430 | orchestrator | Friday 29 August 2025 19:20:32 +0000 (0:00:00.156) 0:00:45.577 *********
2025-08-29 19:20:35.061443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061476 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061489 | orchestrator |
2025-08-29 19:20:35.061524 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 19:20:35.061537 | orchestrator | Friday 29 August 2025 19:20:32 +0000 (0:00:00.161) 0:00:45.738 *********
2025-08-29 19:20:35.061550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061563 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061575 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061588 | orchestrator |
2025-08-29 19:20:35.061601 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 19:20:35.061613 | orchestrator | Friday 29 August 2025 19:20:32 +0000 (0:00:00.179) 0:00:45.918 *********
2025-08-29 19:20:35.061626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061652 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061663 | orchestrator |
2025-08-29 19:20:35.061674 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 19:20:35.061702 | orchestrator | Friday 29 August 2025 19:20:32 +0000 (0:00:00.387) 0:00:46.305 *********
2025-08-29 19:20:35.061714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061736 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061747 | orchestrator |
2025-08-29 19:20:35.061758 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 19:20:35.061768 | orchestrator | Friday 29 August 2025 19:20:32 +0000 (0:00:00.168) 0:00:46.474 *********
2025-08-29 19:20:35.061779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061801 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061812 | orchestrator |
2025-08-29 19:20:35.061823 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 19:20:35.061834 | orchestrator | Friday 29 August 2025 19:20:33 +0000 (0:00:00.157) 0:00:46.631 *********
2025-08-29 19:20:35.061844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061866 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061877 | orchestrator |
2025-08-29 19:20:35.061887 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 19:20:35.061898 | orchestrator | Friday 29 August 2025 19:20:33 +0000 (0:00:00.163) 0:00:46.795 *********
2025-08-29 19:20:35.061909 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.061928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.061939 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.061950 | orchestrator |
2025-08-29 19:20:35.061966 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 19:20:35.061977 | orchestrator | Friday 29 August 2025 19:20:33 +0000 (0:00:00.158) 0:00:46.953 *********
2025-08-29 19:20:35.061988 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:20:35.062000 | orchestrator |
2025-08-29 19:20:35.062010 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 19:20:35.062079 | orchestrator | Friday 29 August 2025 19:20:33 +0000 (0:00:00.507) 0:00:47.461 *********
2025-08-29 19:20:35.062091 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:20:35.062102 | orchestrator |
2025-08-29 19:20:35.062113 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 19:20:35.062124 | orchestrator | Friday 29 August 2025 19:20:34 +0000 (0:00:00.515) 0:00:47.977 *********
2025-08-29 19:20:35.062135 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:20:35.062145 | orchestrator |
2025-08-29 19:20:35.062156 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 19:20:35.062167 | orchestrator | Friday 29 August 2025 19:20:34 +0000 (0:00:00.143) 0:00:48.120 *********
2025-08-29 19:20:35.062178 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'vg_name': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.062190 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'vg_name': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.062201 | orchestrator |
2025-08-29 19:20:35.062211 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 19:20:35.062222 | orchestrator | Friday 29 August 2025 19:20:34 +0000 (0:00:00.182) 0:00:48.302 *********
2025-08-29 19:20:35.062233 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.062244 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.062255 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:35.062266 | orchestrator |
2025-08-29 19:20:35.062277 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 19:20:35.062287 | orchestrator | Friday 29 August 2025 19:20:34 +0000 (0:00:00.170) 0:00:48.473 *********
2025-08-29 19:20:35.062298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:35.062309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:35.062328 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:41.591357 | orchestrator |
2025-08-29 19:20:41.591488 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 19:20:41.591552 | orchestrator | Friday 29 August 2025 19:20:35 +0000 (0:00:00.152) 0:00:48.625 *********
2025-08-29 19:20:41.591567 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'})
2025-08-29 19:20:41.591580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'})
2025-08-29 19:20:41.591592 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:20:41.591604 | orchestrator |
2025-08-29 19:20:41.591616 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 19:20:41.591627 | orchestrator | Friday 29 August 2025 19:20:35 +0000 (0:00:00.149) 0:00:48.775 *********
2025-08-29 19:20:41.591671 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 19:20:41.591694 | orchestrator |     "lvm_report": {
2025-08-29 19:20:41.591715 | orchestrator |         "lv": [
2025-08-29 19:20:41.591735 | orchestrator |             {
2025-08-29 19:20:41.591754 | orchestrator |                 "lv_name": "osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1",
2025-08-29 19:20:41.591769 | orchestrator |                 "vg_name": "ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1"
2025-08-29 19:20:41.591780 | orchestrator |             },
2025-08-29 19:20:41.591791 | orchestrator |             {
2025-08-29 19:20:41.591802 | orchestrator |                 "lv_name": "osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708",
2025-08-29 19:20:41.591812 | orchestrator |                 "vg_name": "ceph-f946ce78-a8de-59ba-8bf5-045c292b6708"
2025-08-29 19:20:41.591823 | orchestrator |             }
2025-08-29 19:20:41.591833 | orchestrator |         ],
2025-08-29 19:20:41.591844 | orchestrator |         "pv": [
2025-08-29 19:20:41.591854 | orchestrator |             {
2025-08-29 19:20:41.591865 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 19:20:41.591877 | orchestrator |                 "vg_name": "ceph-f946ce78-a8de-59ba-8bf5-045c292b6708"
2025-08-29 19:20:41.591890 | orchestrator |             },
2025-08-29 19:20:41.591902 | orchestrator |             {
2025-08-29 19:20:41.591914 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 19:20:41.591926 | orchestrator |                 "vg_name":
"ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1" 2025-08-29 19:20:41.591938 | orchestrator |  } 2025-08-29 19:20:41.591950 | orchestrator |  ] 2025-08-29 19:20:41.591962 | orchestrator |  } 2025-08-29 19:20:41.591974 | orchestrator | } 2025-08-29 19:20:41.591985 | orchestrator | 2025-08-29 19:20:41.591996 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 19:20:41.592007 | orchestrator | 2025-08-29 19:20:41.592018 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 19:20:41.592029 | orchestrator | Friday 29 August 2025 19:20:35 +0000 (0:00:00.522) 0:00:49.298 ********* 2025-08-29 19:20:41.592040 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 19:20:41.592051 | orchestrator | 2025-08-29 19:20:41.592063 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 19:20:41.592073 | orchestrator | Friday 29 August 2025 19:20:35 +0000 (0:00:00.224) 0:00:49.522 ********* 2025-08-29 19:20:41.592084 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:20:41.592095 | orchestrator | 2025-08-29 19:20:41.592106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592117 | orchestrator | Friday 29 August 2025 19:20:36 +0000 (0:00:00.251) 0:00:49.774 ********* 2025-08-29 19:20:41.592128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 19:20:41.592138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 19:20:41.592149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 19:20:41.592159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 19:20:41.592170 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 19:20:41.592180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 19:20:41.592210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 19:20:41.592221 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 19:20:41.592232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 19:20:41.592242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 19:20:41.592253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 19:20:41.592273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 19:20:41.592284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 19:20:41.592294 | orchestrator | 2025-08-29 19:20:41.592305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592316 | orchestrator | Friday 29 August 2025 19:20:36 +0000 (0:00:00.507) 0:00:50.281 ********* 2025-08-29 19:20:41.592326 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592342 | orchestrator | 2025-08-29 19:20:41.592353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592363 | orchestrator | Friday 29 August 2025 19:20:36 +0000 (0:00:00.213) 0:00:50.495 ********* 2025-08-29 19:20:41.592374 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592385 | orchestrator | 2025-08-29 19:20:41.592396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592425 | orchestrator | 
Friday 29 August 2025 19:20:37 +0000 (0:00:00.215) 0:00:50.710 ********* 2025-08-29 19:20:41.592437 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592448 | orchestrator | 2025-08-29 19:20:41.592459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592469 | orchestrator | Friday 29 August 2025 19:20:37 +0000 (0:00:00.212) 0:00:50.922 ********* 2025-08-29 19:20:41.592480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592490 | orchestrator | 2025-08-29 19:20:41.592546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592557 | orchestrator | Friday 29 August 2025 19:20:37 +0000 (0:00:00.214) 0:00:51.136 ********* 2025-08-29 19:20:41.592568 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592579 | orchestrator | 2025-08-29 19:20:41.592637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592650 | orchestrator | Friday 29 August 2025 19:20:37 +0000 (0:00:00.212) 0:00:51.349 ********* 2025-08-29 19:20:41.592661 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592671 | orchestrator | 2025-08-29 19:20:41.592682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592693 | orchestrator | Friday 29 August 2025 19:20:38 +0000 (0:00:00.648) 0:00:51.997 ********* 2025-08-29 19:20:41.592704 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592714 | orchestrator | 2025-08-29 19:20:41.592725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592736 | orchestrator | Friday 29 August 2025 19:20:38 +0000 (0:00:00.241) 0:00:52.239 ********* 2025-08-29 19:20:41.592746 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:41.592757 | orchestrator | 2025-08-29 19:20:41.592772 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592791 | orchestrator | Friday 29 August 2025 19:20:38 +0000 (0:00:00.227) 0:00:52.467 ********* 2025-08-29 19:20:41.592811 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9) 2025-08-29 19:20:41.592831 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9) 2025-08-29 19:20:41.592842 | orchestrator | 2025-08-29 19:20:41.592853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592865 | orchestrator | Friday 29 August 2025 19:20:39 +0000 (0:00:00.424) 0:00:52.891 ********* 2025-08-29 19:20:41.592884 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c) 2025-08-29 19:20:41.592904 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c) 2025-08-29 19:20:41.592920 | orchestrator | 2025-08-29 19:20:41.592931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.592942 | orchestrator | Friday 29 August 2025 19:20:39 +0000 (0:00:00.441) 0:00:53.333 ********* 2025-08-29 19:20:41.592967 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80) 2025-08-29 19:20:41.592982 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80) 2025-08-29 19:20:41.593002 | orchestrator | 2025-08-29 19:20:41.593020 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.593031 | orchestrator | Friday 29 August 2025 19:20:40 +0000 (0:00:00.466) 0:00:53.800 ********* 2025-08-29 19:20:41.593042 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03) 2025-08-29 19:20:41.593054 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03) 2025-08-29 19:20:41.593069 | orchestrator | 2025-08-29 19:20:41.593080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 19:20:41.593091 | orchestrator | Friday 29 August 2025 19:20:40 +0000 (0:00:00.490) 0:00:54.290 ********* 2025-08-29 19:20:41.593102 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 19:20:41.593113 | orchestrator | 2025-08-29 19:20:41.593124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:41.593135 | orchestrator | Friday 29 August 2025 19:20:41 +0000 (0:00:00.405) 0:00:54.696 ********* 2025-08-29 19:20:41.593145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 19:20:41.593156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 19:20:41.593167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 19:20:41.593187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 19:20:41.593206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 19:20:41.593225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 19:20:41.593238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 19:20:41.593248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 19:20:41.593260 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 19:20:41.593281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 19:20:41.593302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 19:20:41.593334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 19:20:50.585725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 19:20:50.585844 | orchestrator | 2025-08-29 19:20:50.585862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.585875 | orchestrator | Friday 29 August 2025 19:20:41 +0000 (0:00:00.452) 0:00:55.148 ********* 2025-08-29 19:20:50.585887 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.585899 | orchestrator | 2025-08-29 19:20:50.585911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.585922 | orchestrator | Friday 29 August 2025 19:20:41 +0000 (0:00:00.246) 0:00:55.395 ********* 2025-08-29 19:20:50.585933 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.585944 | orchestrator | 2025-08-29 19:20:50.585955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.585965 | orchestrator | Friday 29 August 2025 19:20:42 +0000 (0:00:00.221) 0:00:55.616 ********* 2025-08-29 19:20:50.585976 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.585986 | orchestrator | 2025-08-29 19:20:50.585997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586089 | orchestrator | Friday 29 August 2025 19:20:42 +0000 (0:00:00.672) 0:00:56.289 ********* 2025-08-29 19:20:50.586103 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 19:20:50.586114 | orchestrator | 2025-08-29 19:20:50.586124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586135 | orchestrator | Friday 29 August 2025 19:20:42 +0000 (0:00:00.199) 0:00:56.489 ********* 2025-08-29 19:20:50.586146 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586156 | orchestrator | 2025-08-29 19:20:50.586167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586178 | orchestrator | Friday 29 August 2025 19:20:43 +0000 (0:00:00.231) 0:00:56.720 ********* 2025-08-29 19:20:50.586188 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586199 | orchestrator | 2025-08-29 19:20:50.586209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586220 | orchestrator | Friday 29 August 2025 19:20:43 +0000 (0:00:00.220) 0:00:56.941 ********* 2025-08-29 19:20:50.586230 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586241 | orchestrator | 2025-08-29 19:20:50.586252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586265 | orchestrator | Friday 29 August 2025 19:20:43 +0000 (0:00:00.238) 0:00:57.179 ********* 2025-08-29 19:20:50.586277 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586288 | orchestrator | 2025-08-29 19:20:50.586300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586312 | orchestrator | Friday 29 August 2025 19:20:43 +0000 (0:00:00.197) 0:00:57.377 ********* 2025-08-29 19:20:50.586323 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 19:20:50.586336 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 19:20:50.586363 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
19:20:50.586375 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 19:20:50.586387 | orchestrator | 2025-08-29 19:20:50.586399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586412 | orchestrator | Friday 29 August 2025 19:20:44 +0000 (0:00:00.642) 0:00:58.019 ********* 2025-08-29 19:20:50.586423 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586435 | orchestrator | 2025-08-29 19:20:50.586446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586458 | orchestrator | Friday 29 August 2025 19:20:44 +0000 (0:00:00.208) 0:00:58.228 ********* 2025-08-29 19:20:50.586470 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586518 | orchestrator | 2025-08-29 19:20:50.586531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586543 | orchestrator | Friday 29 August 2025 19:20:44 +0000 (0:00:00.194) 0:00:58.422 ********* 2025-08-29 19:20:50.586556 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586567 | orchestrator | 2025-08-29 19:20:50.586580 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 19:20:50.586591 | orchestrator | Friday 29 August 2025 19:20:45 +0000 (0:00:00.198) 0:00:58.621 ********* 2025-08-29 19:20:50.586604 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586615 | orchestrator | 2025-08-29 19:20:50.586626 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 19:20:50.586637 | orchestrator | Friday 29 August 2025 19:20:45 +0000 (0:00:00.194) 0:00:58.816 ********* 2025-08-29 19:20:50.586647 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586658 | orchestrator | 2025-08-29 19:20:50.586668 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-08-29 19:20:50.586679 | orchestrator | Friday 29 August 2025 19:20:45 +0000 (0:00:00.363) 0:00:59.179 ********* 2025-08-29 19:20:50.586690 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd29334ae-dac4-5c8b-9540-76ee60da5ca1'}}) 2025-08-29 19:20:50.586701 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '916dc454-8beb-55d0-b00a-22c96f7025a6'}}) 2025-08-29 19:20:50.586721 | orchestrator | 2025-08-29 19:20:50.586732 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 19:20:50.586743 | orchestrator | Friday 29 August 2025 19:20:45 +0000 (0:00:00.187) 0:00:59.367 ********* 2025-08-29 19:20:50.586755 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'}) 2025-08-29 19:20:50.586767 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'}) 2025-08-29 19:20:50.586778 | orchestrator | 2025-08-29 19:20:50.586789 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 19:20:50.586817 | orchestrator | Friday 29 August 2025 19:20:47 +0000 (0:00:01.783) 0:01:01.151 ********* 2025-08-29 19:20:50.586829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:50.586841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:50.586852 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586862 | orchestrator | 2025-08-29 19:20:50.586873 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-08-29 19:20:50.586884 | orchestrator | Friday 29 August 2025 19:20:47 +0000 (0:00:00.151) 0:01:01.302 ********* 2025-08-29 19:20:50.586895 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'}) 2025-08-29 19:20:50.586906 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'}) 2025-08-29 19:20:50.586917 | orchestrator | 2025-08-29 19:20:50.586928 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 19:20:50.586938 | orchestrator | Friday 29 August 2025 19:20:48 +0000 (0:00:01.251) 0:01:02.553 ********* 2025-08-29 19:20:50.586949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:50.586960 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:50.586971 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.586982 | orchestrator | 2025-08-29 19:20:50.586992 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 19:20:50.587003 | orchestrator | Friday 29 August 2025 19:20:49 +0000 (0:00:00.164) 0:01:02.718 ********* 2025-08-29 19:20:50.587014 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.587025 | orchestrator | 2025-08-29 19:20:50.587035 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 19:20:50.587046 | orchestrator | Friday 29 August 2025 19:20:49 +0000 (0:00:00.133) 0:01:02.851 ********* 2025-08-29 19:20:50.587057 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:50.587074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:50.587085 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.587096 | orchestrator | 2025-08-29 19:20:50.587106 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 19:20:50.587117 | orchestrator | Friday 29 August 2025 19:20:49 +0000 (0:00:00.153) 0:01:03.004 ********* 2025-08-29 19:20:50.587128 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.587147 | orchestrator | 2025-08-29 19:20:50.587157 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 19:20:50.587168 | orchestrator | Friday 29 August 2025 19:20:49 +0000 (0:00:00.146) 0:01:03.150 ********* 2025-08-29 19:20:50.587179 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:50.587190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:50.587200 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.587211 | orchestrator | 2025-08-29 19:20:50.587222 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 19:20:50.587233 | orchestrator | Friday 29 August 2025 19:20:49 +0000 (0:00:00.161) 0:01:03.312 ********* 2025-08-29 19:20:50.587243 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.587254 | orchestrator | 2025-08-29 19:20:50.587265 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 19:20:50.587275 | orchestrator | Friday 29 August 2025 19:20:49 +0000 (0:00:00.160) 0:01:03.472 ********* 2025-08-29 19:20:50.587286 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:50.587297 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:50.587308 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:50.587318 | orchestrator | 2025-08-29 19:20:50.587329 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 19:20:50.587340 | orchestrator | Friday 29 August 2025 19:20:50 +0000 (0:00:00.153) 0:01:03.626 ********* 2025-08-29 19:20:50.587351 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:20:50.587361 | orchestrator | 2025-08-29 19:20:50.587372 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 19:20:50.587383 | orchestrator | Friday 29 August 2025 19:20:50 +0000 (0:00:00.368) 0:01:03.994 ********* 2025-08-29 19:20:50.587401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:56.857034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:56.857146 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:56.857161 | orchestrator | 2025-08-29 19:20:56.857173 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 19:20:56.857185 | orchestrator | Friday 29 August 2025 
19:20:50 +0000 (0:00:00.156) 0:01:04.151 ********* 2025-08-29 19:20:56.857196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:56.857207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:56.857216 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:56.857227 | orchestrator | 2025-08-29 19:20:56.857237 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 19:20:56.857247 | orchestrator | Friday 29 August 2025 19:20:50 +0000 (0:00:00.150) 0:01:04.301 ********* 2025-08-29 19:20:56.857257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})  2025-08-29 19:20:56.857267 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})  2025-08-29 19:20:56.857277 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:56.857306 | orchestrator | 2025-08-29 19:20:56.857317 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 19:20:56.857326 | orchestrator | Friday 29 August 2025 19:20:50 +0000 (0:00:00.168) 0:01:04.470 ********* 2025-08-29 19:20:56.857336 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:20:56.857346 | orchestrator | 2025-08-29 19:20:56.857355 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 19:20:56.857365 | orchestrator | Friday 29 August 2025 19:20:51 +0000 (0:00:00.159) 0:01:04.630 ********* 2025-08-29 19:20:56.857374 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
19:20:56.857384 | orchestrator |
2025-08-29 19:20:56.857394 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 19:20:56.857403 | orchestrator | Friday 29 August 2025 19:20:51 +0000 (0:00:00.145) 0:01:04.775 *********
2025-08-29 19:20:56.857413 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.857422 | orchestrator |
2025-08-29 19:20:56.857432 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 19:20:56.857441 | orchestrator | Friday 29 August 2025 19:20:51 +0000 (0:00:00.135) 0:01:04.911 *********
2025-08-29 19:20:56.857451 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 19:20:56.857461 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 19:20:56.857543 | orchestrator | }
2025-08-29 19:20:56.857557 | orchestrator |
2025-08-29 19:20:56.857567 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 19:20:56.857579 | orchestrator | Friday 29 August 2025 19:20:51 +0000 (0:00:00.148) 0:01:05.059 *********
2025-08-29 19:20:56.857590 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 19:20:56.857602 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 19:20:56.857613 | orchestrator | }
2025-08-29 19:20:56.857624 | orchestrator |
2025-08-29 19:20:56.857635 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 19:20:56.857647 | orchestrator | Friday 29 August 2025 19:20:51 +0000 (0:00:00.162) 0:01:05.222 *********
2025-08-29 19:20:56.857658 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 19:20:56.857669 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 19:20:56.857680 | orchestrator | }
2025-08-29 19:20:56.857691 | orchestrator |
2025-08-29 19:20:56.857702 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 19:20:56.857712 | orchestrator | Friday 29 August 2025 19:20:51 +0000 (0:00:00.159) 0:01:05.382 *********
2025-08-29 19:20:56.857722 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:56.857732 | orchestrator |
2025-08-29 19:20:56.857742 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 19:20:56.857752 | orchestrator | Friday 29 August 2025 19:20:52 +0000 (0:00:00.498) 0:01:05.880 *********
2025-08-29 19:20:56.857762 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:56.857772 | orchestrator |
2025-08-29 19:20:56.857782 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 19:20:56.857792 | orchestrator | Friday 29 August 2025 19:20:52 +0000 (0:00:00.518) 0:01:06.399 *********
2025-08-29 19:20:56.857802 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:56.857812 | orchestrator |
2025-08-29 19:20:56.857822 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 19:20:56.857832 | orchestrator | Friday 29 August 2025 19:20:53 +0000 (0:00:00.721) 0:01:07.120 *********
2025-08-29 19:20:56.857842 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:56.857852 | orchestrator |
2025-08-29 19:20:56.857863 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 19:20:56.857873 | orchestrator | Friday 29 August 2025 19:20:53 +0000 (0:00:00.146) 0:01:07.267 *********
2025-08-29 19:20:56.857883 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.857893 | orchestrator |
2025-08-29 19:20:56.857903 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 19:20:56.857913 | orchestrator | Friday 29 August 2025 19:20:53 +0000 (0:00:00.116) 0:01:07.384 *********
2025-08-29 19:20:56.857930 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.857940 | orchestrator |
2025-08-29 19:20:56.857950 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 19:20:56.857960 | orchestrator | Friday 29 August 2025 19:20:53 +0000 (0:00:00.120) 0:01:07.504 *********
2025-08-29 19:20:56.857970 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 19:20:56.857998 | orchestrator |     "vgs_report": {
2025-08-29 19:20:56.858009 | orchestrator |         "vg": []
2025-08-29 19:20:56.858088 | orchestrator |     }
2025-08-29 19:20:56.858100 | orchestrator | }
2025-08-29 19:20:56.858110 | orchestrator |
2025-08-29 19:20:56.858119 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 19:20:56.858129 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.157) 0:01:07.661 *********
2025-08-29 19:20:56.858139 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858148 | orchestrator |
2025-08-29 19:20:56.858158 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 19:20:56.858167 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.142) 0:01:07.803 *********
2025-08-29 19:20:56.858177 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858187 | orchestrator |
2025-08-29 19:20:56.858196 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 19:20:56.858206 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.138) 0:01:07.941 *********
2025-08-29 19:20:56.858216 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858225 | orchestrator |
2025-08-29 19:20:56.858235 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 19:20:56.858245 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.136) 0:01:08.078 *********
2025-08-29 19:20:56.858254 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858264 | orchestrator |
2025-08-29 19:20:56.858273 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 19:20:56.858283 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.137) 0:01:08.215 *********
2025-08-29 19:20:56.858293 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858302 | orchestrator |
2025-08-29 19:20:56.858312 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 19:20:56.858321 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.134) 0:01:08.350 *********
2025-08-29 19:20:56.858331 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858341 | orchestrator |
2025-08-29 19:20:56.858350 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 19:20:56.858360 | orchestrator | Friday 29 August 2025 19:20:54 +0000 (0:00:00.151) 0:01:08.501 *********
2025-08-29 19:20:56.858370 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858379 | orchestrator |
2025-08-29 19:20:56.858388 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 19:20:56.858398 | orchestrator | Friday 29 August 2025 19:20:55 +0000 (0:00:00.137) 0:01:08.639 *********
2025-08-29 19:20:56.858408 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858417 | orchestrator |
2025-08-29 19:20:56.858427 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 19:20:56.858436 | orchestrator | Friday 29 August 2025 19:20:55 +0000 (0:00:00.144) 0:01:08.783 *********
2025-08-29 19:20:56.858446 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858455 | orchestrator |
2025-08-29 19:20:56.858465 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 19:20:56.858511 | orchestrator | Friday 29 August 2025 19:20:55 +0000 (0:00:00.346) 0:01:09.130 *********
2025-08-29 19:20:56.858529 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858547 | orchestrator |
2025-08-29 19:20:56.858563 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 19:20:56.858574 | orchestrator | Friday 29 August 2025 19:20:55 +0000 (0:00:00.154) 0:01:09.285 *********
2025-08-29 19:20:56.858583 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858601 | orchestrator |
2025-08-29 19:20:56.858611 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 19:20:56.858620 | orchestrator | Friday 29 August 2025 19:20:55 +0000 (0:00:00.142) 0:01:09.427 *********
2025-08-29 19:20:56.858630 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858640 | orchestrator |
2025-08-29 19:20:56.858649 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 19:20:56.858659 | orchestrator | Friday 29 August 2025 19:20:55 +0000 (0:00:00.141) 0:01:09.569 *********
2025-08-29 19:20:56.858668 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858678 | orchestrator |
2025-08-29 19:20:56.858687 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 19:20:56.858697 | orchestrator | Friday 29 August 2025 19:20:56 +0000 (0:00:00.154) 0:01:09.724 *********
2025-08-29 19:20:56.858706 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858716 | orchestrator |
2025-08-29 19:20:56.858725 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 19:20:56.858735 | orchestrator | Friday 29 August 2025 19:20:56 +0000 (0:00:00.167) 0:01:09.892 *********
2025-08-29 19:20:56.858744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:56.858754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:56.858764 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858773 | orchestrator |
2025-08-29 19:20:56.858783 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 19:20:56.858792 | orchestrator | Friday 29 August 2025 19:20:56 +0000 (0:00:00.178) 0:01:10.070 *********
2025-08-29 19:20:56.858802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:56.858812 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:56.858821 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:56.858831 | orchestrator |
2025-08-29 19:20:56.858840 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 19:20:56.858850 | orchestrator | Friday 29 August 2025 19:20:56 +0000 (0:00:00.174) 0:01:10.245 *********
2025-08-29 19:20:56.858867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947608 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.947619 | orchestrator |
2025-08-29 19:20:59.947656 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 19:20:59.947667 | orchestrator | Friday 29 August 2025 19:20:56 +0000 (0:00:00.173) 0:01:10.418 *********
2025-08-29 19:20:59.947674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947688 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.947694 | orchestrator |
2025-08-29 19:20:59.947701 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 19:20:59.947707 | orchestrator | Friday 29 August 2025 19:20:57 +0000 (0:00:00.159) 0:01:10.578 *********
2025-08-29 19:20:59.947713 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947746 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.947752 | orchestrator |
2025-08-29 19:20:59.947758 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 19:20:59.947764 | orchestrator | Friday 29 August 2025 19:20:57 +0000 (0:00:00.164) 0:01:10.743 *********
2025-08-29 19:20:59.947771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947783 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.947789 | orchestrator |
2025-08-29 19:20:59.947808 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 19:20:59.947815 | orchestrator | Friday 29 August 2025 19:20:57 +0000 (0:00:00.160) 0:01:10.903 *********
2025-08-29 19:20:59.947821 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947833 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.947840 | orchestrator |
2025-08-29 19:20:59.947846 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 19:20:59.947852 | orchestrator | Friday 29 August 2025 19:20:57 +0000 (0:00:00.381) 0:01:11.284 *********
2025-08-29 19:20:59.947859 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947865 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947871 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.947877 | orchestrator |
2025-08-29 19:20:59.947884 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 19:20:59.947890 | orchestrator | Friday 29 August 2025 19:20:57 +0000 (0:00:00.173) 0:01:11.458 *********
2025-08-29 19:20:59.947896 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:59.947903 | orchestrator |
2025-08-29 19:20:59.947909 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 19:20:59.947916 | orchestrator | Friday 29 August 2025 19:20:58 +0000 (0:00:00.523) 0:01:11.981 *********
2025-08-29 19:20:59.947922 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:59.947928 | orchestrator |
2025-08-29 19:20:59.947934 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 19:20:59.947941 | orchestrator | Friday 29 August 2025 19:20:58 +0000 (0:00:00.519) 0:01:12.501 *********
2025-08-29 19:20:59.947947 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:20:59.947953 | orchestrator |
2025-08-29 19:20:59.947959 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 19:20:59.947966 | orchestrator | Friday 29 August 2025 19:20:59 +0000 (0:00:00.149) 0:01:12.650 *********
2025-08-29 19:20:59.947972 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'vg_name': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.947979 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'vg_name': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.947985 | orchestrator |
2025-08-29 19:20:59.947991 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 19:20:59.948002 | orchestrator | Friday 29 August 2025 19:20:59 +0000 (0:00:00.183) 0:01:12.833 *********
2025-08-29 19:20:59.948024 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.948030 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.948036 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.948043 | orchestrator |
2025-08-29 19:20:59.948050 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 19:20:59.948056 | orchestrator | Friday 29 August 2025 19:20:59 +0000 (0:00:00.164) 0:01:12.998 *********
2025-08-29 19:20:59.948062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.948069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.948076 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.948082 | orchestrator |
2025-08-29 19:20:59.948089 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 19:20:59.948095 | orchestrator | Friday 29 August 2025 19:20:59 +0000 (0:00:00.185) 0:01:13.183 *********
2025-08-29 19:20:59.948102 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'})
2025-08-29 19:20:59.948108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'})
2025-08-29 19:20:59.948115 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:20:59.948121 | orchestrator |
2025-08-29 19:20:59.948128 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 19:20:59.948134 | orchestrator | Friday 29 August 2025 19:20:59 +0000 (0:00:00.151) 0:01:13.334 *********
2025-08-29 19:20:59.948141 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 19:20:59.948147 | orchestrator |     "lvm_report": {
2025-08-29 19:20:59.948153 | orchestrator |         "lv": [
2025-08-29 19:20:59.948160 | orchestrator |             {
2025-08-29 19:20:59.948167 | orchestrator |                 "lv_name": "osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6",
2025-08-29 19:20:59.948178 | orchestrator |                 "vg_name": "ceph-916dc454-8beb-55d0-b00a-22c96f7025a6"
2025-08-29 19:20:59.948185 | orchestrator |             },
2025-08-29 19:20:59.948191 | orchestrator |             {
2025-08-29 19:20:59.948197 | orchestrator |                 "lv_name": "osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1",
2025-08-29 19:20:59.948204 | orchestrator |                 "vg_name": "ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1"
2025-08-29 19:20:59.948210 | orchestrator |             }
2025-08-29 19:20:59.948217 | orchestrator |         ],
2025-08-29 19:20:59.948224 | orchestrator |         "pv": [
2025-08-29 19:20:59.948231 | orchestrator |             {
2025-08-29 19:20:59.948237 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 19:20:59.948243 | orchestrator |                 "vg_name": "ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1"
2025-08-29 19:20:59.948250 | orchestrator |             },
2025-08-29 19:20:59.948256 | orchestrator |             {
2025-08-29 19:20:59.948262 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 19:20:59.948269 | orchestrator |                 "vg_name": "ceph-916dc454-8beb-55d0-b00a-22c96f7025a6"
2025-08-29 19:20:59.948275 | orchestrator |             }
2025-08-29 19:20:59.948282 | orchestrator |         ]
2025-08-29 19:20:59.948288 | orchestrator |     }
2025-08-29 19:20:59.948295 | orchestrator | }
2025-08-29 19:20:59.948302 | orchestrator |
2025-08-29 19:20:59.948306 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:20:59.948314 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-08-29 19:20:59.948319 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-08-29 19:20:59.948323 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-08-29 19:20:59.948327 | orchestrator |
2025-08-29 19:20:59.948331 | orchestrator |
2025-08-29 19:20:59.948335 | orchestrator |
2025-08-29 19:20:59.948340 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:20:59.948344 | orchestrator | Friday 29 August 2025 19:20:59 +0000 (0:00:00.153) 0:01:13.487 *********
2025-08-29 19:20:59.948348 | orchestrator | ===============================================================================
2025-08-29 19:20:59.948352 | orchestrator | Create block VGs -------------------------------------------------------- 5.59s
2025-08-29 19:20:59.948357 | orchestrator | Create block LVs -------------------------------------------------------- 3.95s
2025-08-29 19:20:59.948361 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.90s
2025-08-29 19:20:59.948365 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.70s
2025-08-29 19:20:59.948370 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s
2025-08-29 19:20:59.948374 | orchestrator | Add known partitions to the list of available block devices ------------- 1.57s
2025-08-29 19:20:59.948378 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s
2025-08-29 19:20:59.948382 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.49s
2025-08-29 19:20:59.948390 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s
2025-08-29 19:21:00.383419 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s
2025-08-29 19:21:00.383532 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s
2025-08-29 19:21:00.383540 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2025-08-29 19:21:00.383546 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2025-08-29 19:21:00.383551 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.74s
2025-08-29 19:21:00.383557 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2025-08-29 19:21:00.383562 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s
2025-08-29 19:21:00.383568 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.70s
2025-08-29 19:21:00.383573 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s
2025-08-29 19:21:00.383578 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s
2025-08-29 19:21:00.383584 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-08-29 19:21:12.697949 | orchestrator | 2025-08-29 19:21:12 | INFO  | Task 5992a98a-218d-45af-93c0-6ebb9e24a6c4 (facts) was prepared for execution.
2025-08-29 19:21:12.698109 | orchestrator | 2025-08-29 19:21:12 | INFO  | It takes a moment until task 5992a98a-218d-45af-93c0-6ebb9e24a6c4 (facts) has been started and output is visible here.
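The "Gather … VGs with total and available size in bytes" tasks, followed by "Combine JSON from _db/wal/db_wal_vgs_cmd_output", suggest the play parses LVM's JSON reporting output (as emitted by `vgs --units b --reportformat json`) into the `vgs_report` structure printed above (empty here, since testbed-node-5 has no DB/WAL VGs). A minimal sketch of that parsing, with hypothetical sample data shaped like LVM's inner report object:

```python
import json

# Hypothetical sample shaped like the inner object LVM emits via
# `vgs --units b --reportformat json`; empty in the log above because
# no dedicated DB/WAL VGs exist on the node.
sample = json.loads("""
{
  "vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "64424509440B"}
  ]
}
""")

def vg_sizes_bytes(report):
    """Map vg_name -> (total_bytes, free_bytes), stripping LVM's 'B' suffix."""
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")), int(vg["vg_free"].rstrip("B")))
        for vg in report["vg"]
    }

print(vg_sizes_bytes(sample))
```

With totals like these in hand, the subsequent "Fail if size of … LVs > available" checks reduce to comparing the requested LV sizes against the free bytes per VG.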
2025-08-29 19:21:24.619361 | orchestrator |
2025-08-29 19:21:24.619515 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-08-29 19:21:24.619528 | orchestrator |
2025-08-29 19:21:24.619545 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 19:21:24.619583 | orchestrator | Friday 29 August 2025 19:21:16 +0000 (0:00:00.285) 0:00:00.285 *********
2025-08-29 19:21:24.619592 | orchestrator | ok: [testbed-manager]
2025-08-29 19:21:24.619600 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:21:24.619622 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:21:24.619629 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:21:24.619635 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:21:24.619641 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:21:24.619647 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:21:24.619653 | orchestrator |
2025-08-29 19:21:24.619660 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 19:21:24.619666 | orchestrator | Friday 29 August 2025 19:21:17 +0000 (0:00:01.105) 0:00:01.390 *********
2025-08-29 19:21:24.619672 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:21:24.619680 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:21:24.619687 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:21:24.619693 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:21:24.619699 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:21:24.619705 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:21:24.619711 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:21:24.619717 | orchestrator |
2025-08-29 19:21:24.619724 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 19:21:24.619730 | orchestrator |
2025-08-29 19:21:24.619736 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 19:21:24.619742 | orchestrator | Friday 29 August 2025 19:21:19 +0000 (0:00:01.258) 0:00:02.648 *********
2025-08-29 19:21:24.619749 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:21:24.619755 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:21:24.619761 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:21:24.619767 | orchestrator | ok: [testbed-manager]
2025-08-29 19:21:24.619773 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:21:24.619779 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:21:24.619785 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:21:24.619791 | orchestrator |
2025-08-29 19:21:24.619797 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 19:21:24.619803 | orchestrator |
2025-08-29 19:21:24.619809 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 19:21:24.619816 | orchestrator | Friday 29 August 2025 19:21:23 +0000 (0:00:04.591) 0:00:07.240 *********
2025-08-29 19:21:24.619822 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:21:24.619828 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:21:24.619834 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:21:24.619840 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:21:24.619846 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:21:24.619852 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:21:24.619858 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:21:24.619864 | orchestrator |
2025-08-29 19:21:24.619870 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:21:24.619877 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619884 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619891 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619897 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619903 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619909 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619915 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:21:24.619926 | orchestrator |
2025-08-29 19:21:24.619933 | orchestrator |
2025-08-29 19:21:24.619940 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:21:24.619947 | orchestrator | Friday 29 August 2025 19:21:24 +0000 (0:00:00.535) 0:00:07.775 *********
2025-08-29 19:21:24.619954 | orchestrator | ===============================================================================
2025-08-29 19:21:24.619961 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.59s
2025-08-29 19:21:24.619968 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-08-29 19:21:24.619975 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2025-08-29 19:21:24.619982 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-08-29 19:21:36.913810 | orchestrator | 2025-08-29 19:21:36 | INFO  | Task 7453f713-c099-4957-bde2-128e564e6f59 (frr) was prepared for execution.
2025-08-29 19:21:36.913928 | orchestrator | 2025-08-29 19:21:36 | INFO  | It takes a moment until task 7453f713-c099-4957-bde2-128e564e6f59 (frr) has been started and output is visible here.
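The "Create custom facts directory" task in `osism.commons.facts` presumably targets Ansible's standard local-facts mechanism: JSON (or INI, or executable) files named `*.fact` under `/etc/ansible/facts.d` are picked up at fact-gathering time and exposed to plays as `ansible_local.<basename>`. A small sketch of the file side of that convention, written to a temporary directory for illustration (the fact name and content are made up):

```python
import json
import pathlib
import tempfile

# Sketch, assuming the role uses Ansible's local-facts convention:
# a JSON file facts.d/<name>.fact surfaces as ansible_local.<name>.
facts_d = pathlib.Path(tempfile.mkdtemp())  # stand-in for /etc/ansible/facts.d
fact_file = facts_d / "testbed.fact"        # hypothetical fact name
fact_file.write_text(json.dumps({"deployed_by": "osism"}))

# At gather time this content would appear as ansible_local.testbed.deployed_by.
loaded = json.loads(fact_file.read_text())
print(loaded["deployed_by"])
```

This also explains why "Copy fact files" is skipped above: with no custom fact files configured, only the directory itself is ensured.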
2025-08-29 19:22:03.224116 | orchestrator |
2025-08-29 19:22:03.224218 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-08-29 19:22:03.224230 | orchestrator |
2025-08-29 19:22:03.224239 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-08-29 19:22:03.224248 | orchestrator | Friday 29 August 2025 19:21:41 +0000 (0:00:00.249) 0:00:00.249 *********
2025-08-29 19:22:03.224272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 19:22:03.224281 | orchestrator |
2025-08-29 19:22:03.224289 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-08-29 19:22:03.224296 | orchestrator | Friday 29 August 2025 19:21:41 +0000 (0:00:00.229) 0:00:00.478 *********
2025-08-29 19:22:03.224304 | orchestrator | changed: [testbed-manager]
2025-08-29 19:22:03.224311 | orchestrator |
2025-08-29 19:22:03.224319 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-08-29 19:22:03.224326 | orchestrator | Friday 29 August 2025 19:21:42 +0000 (0:00:01.125) 0:00:01.604 *********
2025-08-29 19:22:03.224334 | orchestrator | changed: [testbed-manager]
2025-08-29 19:22:03.224341 | orchestrator |
2025-08-29 19:22:03.224351 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-08-29 19:22:03.224359 | orchestrator | Friday 29 August 2025 19:21:52 +0000 (0:00:10.081) 0:00:11.685 *********
2025-08-29 19:22:03.224366 | orchestrator | ok: [testbed-manager]
2025-08-29 19:22:03.224374 | orchestrator |
2025-08-29 19:22:03.224381 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-08-29 19:22:03.224388 | orchestrator | Friday 29 August 2025 19:21:53 +0000 (0:00:01.335) 0:00:13.021 *********
2025-08-29 19:22:03.224395 | orchestrator | changed: [testbed-manager]
2025-08-29 19:22:03.224443 | orchestrator |
2025-08-29 19:22:03.224452 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-08-29 19:22:03.224459 | orchestrator | Friday 29 August 2025 19:21:54 +0000 (0:00:00.998) 0:00:14.020 *********
2025-08-29 19:22:03.224466 | orchestrator | ok: [testbed-manager]
2025-08-29 19:22:03.224473 | orchestrator |
2025-08-29 19:22:03.224480 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-08-29 19:22:03.224488 | orchestrator | Friday 29 August 2025 19:21:55 +0000 (0:00:01.204) 0:00:15.224 *********
2025-08-29 19:22:03.224495 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:22:03.224502 | orchestrator |
2025-08-29 19:22:03.224510 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-08-29 19:22:03.224517 | orchestrator | Friday 29 August 2025 19:21:56 +0000 (0:00:00.895) 0:00:16.120 *********
2025-08-29 19:22:03.224524 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:22:03.224531 | orchestrator |
2025-08-29 19:22:03.224539 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-08-29 19:22:03.224563 | orchestrator | Friday 29 August 2025 19:21:57 +0000 (0:00:00.157) 0:00:16.277 *********
2025-08-29 19:22:03.224570 | orchestrator | changed: [testbed-manager]
2025-08-29 19:22:03.224578 | orchestrator |
2025-08-29 19:22:03.224585 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-08-29 19:22:03.224592 | orchestrator | Friday 29 August 2025 19:21:57 +0000 (0:00:00.936) 0:00:17.214 *********
2025-08-29 19:22:03.224599 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-08-29 19:22:03.224607 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-08-29 19:22:03.224615 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-08-29 19:22:03.224623 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-08-29 19:22:03.224630 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-08-29 19:22:03.224637 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-08-29 19:22:03.224644 | orchestrator |
2025-08-29 19:22:03.224651 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-08-29 19:22:03.224659 | orchestrator | Friday 29 August 2025 19:22:00 +0000 (0:00:02.190) 0:00:19.404 *********
2025-08-29 19:22:03.224666 | orchestrator | ok: [testbed-manager]
2025-08-29 19:22:03.224673 | orchestrator |
2025-08-29 19:22:03.224681 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-08-29 19:22:03.224690 | orchestrator | Friday 29 August 2025 19:22:01 +0000 (0:00:01.391) 0:00:20.796 *********
2025-08-29 19:22:03.224698 | orchestrator | changed: [testbed-manager]
2025-08-29 19:22:03.224707 | orchestrator |
2025-08-29 19:22:03.224715 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:22:03.224724 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 19:22:03.224732 | orchestrator |
2025-08-29 19:22:03.224741 | orchestrator |
2025-08-29 19:22:03.224749 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:22:03.224757 | orchestrator | Friday 29 August 2025 19:22:02 +0000 (0:00:01.401) 0:00:22.198 *********
2025-08-29 19:22:03.224765 | orchestrator | ===============================================================================
2025-08-29 19:22:03.224774 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.08s
2025-08-29 19:22:03.224782 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.19s
2025-08-29 19:22:03.224791 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s
2025-08-29 19:22:03.224799 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.39s
2025-08-29 19:22:03.224821 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.34s
2025-08-29 19:22:03.224830 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s
2025-08-29 19:22:03.224838 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.13s
2025-08-29 19:22:03.224846 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.00s
2025-08-29 19:22:03.224854 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.94s
2025-08-29 19:22:03.224863 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.90s
2025-08-29 19:22:03.224871 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-08-29 19:22:03.224880 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-08-29 19:22:03.532096 | orchestrator |
2025-08-29 19:22:03.536735 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 19:22:03 UTC 2025
2025-08-29 19:22:03.536854 | orchestrator |
2025-08-29 19:22:05.503579 | orchestrator | 2025-08-29 19:22:05 | INFO  | Collection nutshell is prepared for execution
2025-08-29 19:22:05.503691 | orchestrator | 2025-08-29 19:22:05 | INFO  | D [0] - dotfiles
2025-08-29 19:22:15.612158 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [0] - homer
2025-08-29 19:22:15.612271 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [0] - netdata
2025-08-29 19:22:15.612286 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [0] - openstackclient
2025-08-29 19:22:15.612298 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [0] - phpmyadmin
2025-08-29 19:22:15.612309 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [0] - common
2025-08-29 19:22:15.614722 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [1] -- loadbalancer
2025-08-29 19:22:15.614831 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [2] --- opensearch
2025-08-29 19:22:15.614914 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [2] --- mariadb-ng
2025-08-29 19:22:15.614929 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [3] ---- horizon
2025-08-29 19:22:15.614940 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [3] ---- keystone
2025-08-29 19:22:15.615029 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [4] ----- neutron
2025-08-29 19:22:15.615042 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [5] ------ wait-for-nova
2025-08-29 19:22:15.615053 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [5] ------ octavia
2025-08-29 19:22:15.616580 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [4] ----- barbican
2025-08-29 19:22:15.616626 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [4] ----- designate
2025-08-29 19:22:15.616645 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [4] ----- ironic
2025-08-29 19:22:15.616661 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [4] ----- placement
2025-08-29 19:22:15.616713 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [4] ----- magnum
2025-08-29 19:22:15.617372 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [1] -- openvswitch
2025-08-29 19:22:15.617450 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [2] --- ovn
2025-08-29 19:22:15.617667 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [1] --
memcached 2025-08-29 19:22:15.617759 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [1] -- redis 2025-08-29 19:22:15.617891 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [1] -- rabbitmq-ng 2025-08-29 19:22:15.618368 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [0] - kubernetes 2025-08-29 19:22:15.620644 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [1] -- kubeconfig 2025-08-29 19:22:15.620672 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [1] -- copy-kubeconfig 2025-08-29 19:22:15.620910 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [0] - ceph 2025-08-29 19:22:15.622857 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [1] -- ceph-pools 2025-08-29 19:22:15.622892 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [2] --- copy-ceph-keys 2025-08-29 19:22:15.622905 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [3] ---- cephclient 2025-08-29 19:22:15.623122 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-08-29 19:22:15.623595 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [4] ----- wait-for-keystone 2025-08-29 19:22:15.623617 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [5] ------ kolla-ceph-rgw 2025-08-29 19:22:15.623629 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [5] ------ glance 2025-08-29 19:22:15.623640 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [5] ------ cinder 2025-08-29 19:22:15.623895 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [5] ------ nova 2025-08-29 19:22:15.623939 | orchestrator | 2025-08-29 19:22:15 | INFO  | A [4] ----- prometheus 2025-08-29 19:22:15.623955 | orchestrator | 2025-08-29 19:22:15 | INFO  | D [5] ------ grafana 2025-08-29 19:22:15.810919 | orchestrator | 2025-08-29 19:22:15 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-08-29 19:22:15.811022 | orchestrator | 2025-08-29 19:22:15 | INFO  | Tasks are running in the background 2025-08-29 19:22:19.035995 | orchestrator | 2025-08-29 19:22:19 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-08-29 19:22:21.145236 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task d03e6950-e637-44f3-b2e2-c033355d6203 is in state STARTED 2025-08-29 19:22:21.145345 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:22:21.145881 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:22:21.146349 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:22:21.147114 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task 5fecc215-339a-4e59-b18c-0523dddd999a is in state STARTED 2025-08-29 19:22:21.147465 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task 31accb88-4c8e-49b3-a352-28acfcb73bdf is in state STARTED 2025-08-29 19:22:21.151069 | orchestrator | 2025-08-29 19:22:21 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:22:21.151095 | orchestrator | 2025-08-29 19:22:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:22:42.789744 | orchestrator | 2025-08-29 19:22:42.789853 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-08-29 19:22:42.789868 | orchestrator | 2025-08-29 19:22:42.789880 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
**** 2025-08-29 19:22:42.789892 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.397) 0:00:00.397 ********* 2025-08-29 19:22:42.789904 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:22:42.789916 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:22:42.789927 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:22:42.789938 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:22:42.789949 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:22:42.789960 | orchestrator | changed: [testbed-manager] 2025-08-29 19:22:42.789971 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:22:42.789982 | orchestrator | 2025-08-29 19:22:42.789993 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-08-29 19:22:42.790004 | orchestrator | Friday 29 August 2025 19:22:31 +0000 (0:00:03.537) 0:00:03.934 ********* 2025-08-29 19:22:42.790067 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 19:22:42.790080 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 19:22:42.790092 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 19:22:42.790102 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 19:22:42.790113 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 19:22:42.790124 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 19:22:42.790135 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 19:22:42.790146 | orchestrator | 2025-08-29 19:22:42.790158 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-08-29 19:22:42.790169 | orchestrator | Friday 29 August 2025 19:22:33 +0000 (0:00:02.539) 0:00:06.474 ********* 2025-08-29 19:22:42.790194 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:32.403801', 'end': '2025-08-29 19:22:32.411925', 'delta': '0:00:00.008124', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790211 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:32.118488', 'end': '2025-08-29 19:22:32.128648', 'delta': '0:00:00.010160', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790245 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:32.142605', 'end': '2025-08-29 19:22:32.152040', 'delta': '0:00:00.009435', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790289 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:32.806193', 'end': '2025-08-29 19:22:32.813525', 'delta': '0:00:00.007332', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790304 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:32.061491', 'end': '2025-08-29 19:22:32.066464', 'delta': '0:00:00.004973', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790636 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:33.253024', 'end': '2025-08-29 19:22:33.261363', 'delta': '0:00:00.008339', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790653 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 19:22:33.680156', 'end': '2025-08-29 19:22:33.685805', 'delta': '0:00:00.005649', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 19:22:42.790679 | orchestrator | 2025-08-29 19:22:42.790691 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-08-29 19:22:42.790703 | orchestrator | Friday 29 August 2025 19:22:34 +0000 (0:00:01.168) 0:00:07.642 ********* 2025-08-29 19:22:42.790713 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 19:22:42.790724 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 19:22:42.790735 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 19:22:42.790746 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 19:22:42.790757 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 19:22:42.790768 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 19:22:42.790779 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 19:22:42.790789 | orchestrator | 2025-08-29 19:22:42.790800 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-08-29 19:22:42.790811 | orchestrator | Friday 29 August 2025 19:22:37 +0000 (0:00:02.042) 0:00:09.685 ********* 2025-08-29 19:22:42.790827 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-08-29 19:22:42.790839 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 19:22:42.790849 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 19:22:42.790860 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 19:22:42.790871 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 19:22:42.790882 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 19:22:42.790893 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 19:22:42.790903 | orchestrator | 2025-08-29 19:22:42.790914 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:22:42.790935 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.790948 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.790959 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.790970 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.790981 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.790992 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.791002 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:22:42.791013 | orchestrator | 2025-08-29 19:22:42.791024 | orchestrator | 2025-08-29 19:22:42.791035 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-08-29 19:22:42.791046 | orchestrator | Friday 29 August 2025 19:22:39 +0000 (0:00:02.747) 0:00:12.432 ********* 2025-08-29 19:22:42.791057 | orchestrator | =============================================================================== 2025-08-29 19:22:42.791068 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.54s 2025-08-29 19:22:42.791079 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.75s 2025-08-29 19:22:42.791096 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.54s 2025-08-29 19:22:42.791108 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.04s 2025-08-29 19:22:42.791118 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.17s 2025-08-29 19:22:42.791130 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task d03e6950-e637-44f3-b2e2-c033355d6203 is in state SUCCESS 2025-08-29 19:22:42.791141 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:22:42.791152 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:22:42.791163 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:22:42.791174 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:22:42.791185 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 5fecc215-339a-4e59-b18c-0523dddd999a is in state STARTED 2025-08-29 19:22:42.791196 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 31accb88-4c8e-49b3-a352-28acfcb73bdf is in state STARTED 2025-08-29 19:22:42.795510 | orchestrator | 2025-08-29 19:22:42 | INFO  | Task 
2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:22:42.795542 | orchestrator | 2025-08-29 19:22:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:10.317795 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:10.317879 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:10.317888 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:10.317894 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:10.319606 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 5fecc215-339a-4e59-b18c-0523dddd999a is in state STARTED 2025-08-29 19:23:10.319631 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 31accb88-4c8e-49b3-a352-28acfcb73bdf is in state SUCCESS 2025-08-29 19:23:10.319957 | orchestrator | 2025-08-29 19:23:10 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:10.319972 | orchestrator | 2025-08-29 19:23:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:13.386309 | orchestrator | 2025-08-29 19:23:13 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:13.386449 | orchestrator | 2025-08-29 19:23:13 | INFO  | Task
788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:13.387089 | orchestrator | 2025-08-29 19:23:13 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:13.388181 | orchestrator | 2025-08-29 19:23:13 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:13.388378 | orchestrator | 2025-08-29 19:23:13 | INFO  | Task 5fecc215-339a-4e59-b18c-0523dddd999a is in state SUCCESS 2025-08-29 19:23:13.389835 | orchestrator | 2025-08-29 19:23:13 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:13.389871 | orchestrator | 2025-08-29 19:23:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:16.467519 | orchestrator | 2025-08-29 19:23:16 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:16.468298 | orchestrator | 2025-08-29 19:23:16 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:16.469245 | orchestrator | 2025-08-29 19:23:16 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:16.471463 | orchestrator | 2025-08-29 19:23:16 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:16.473390 | orchestrator | 2025-08-29 19:23:16 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:16.473422 | orchestrator | 2025-08-29 19:23:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:19.542382 | orchestrator | 2025-08-29 19:23:19 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:19.546096 | orchestrator | 2025-08-29 19:23:19 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:19.551094 | orchestrator | 2025-08-29 19:23:19 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:19.555960 | orchestrator | 2025-08-29 19:23:19 | INFO  | Task 
6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:19.558007 | orchestrator | 2025-08-29 19:23:19 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:19.558146 | orchestrator | 2025-08-29 19:23:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:22.607616 | orchestrator | 2025-08-29 19:23:22 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:22.610216 | orchestrator | 2025-08-29 19:23:22 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:22.611460 | orchestrator | 2025-08-29 19:23:22 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:22.613643 | orchestrator | 2025-08-29 19:23:22 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:22.615776 | orchestrator | 2025-08-29 19:23:22 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:22.615826 | orchestrator | 2025-08-29 19:23:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:25.667331 | orchestrator | 2025-08-29 19:23:25 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:25.669880 | orchestrator | 2025-08-29 19:23:25 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:25.670976 | orchestrator | 2025-08-29 19:23:25 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:25.673031 | orchestrator | 2025-08-29 19:23:25 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:25.675047 | orchestrator | 2025-08-29 19:23:25 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:25.675730 | orchestrator | 2025-08-29 19:23:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:28.730813 | orchestrator | 2025-08-29 19:23:28 | INFO  | Task 
7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:28.733973 | orchestrator | 2025-08-29 19:23:28 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:28.736974 | orchestrator | 2025-08-29 19:23:28 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:28.741104 | orchestrator | 2025-08-29 19:23:28 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:28.745286 | orchestrator | 2025-08-29 19:23:28 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:28.747158 | orchestrator | 2025-08-29 19:23:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:31.809926 | orchestrator | 2025-08-29 19:23:31 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:31.810944 | orchestrator | 2025-08-29 19:23:31 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:31.812149 | orchestrator | 2025-08-29 19:23:31 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:31.819273 | orchestrator | 2025-08-29 19:23:31 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:31.828166 | orchestrator | 2025-08-29 19:23:31 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:31.828618 | orchestrator | 2025-08-29 19:23:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:34.884811 | orchestrator | 2025-08-29 19:23:34 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:34.884921 | orchestrator | 2025-08-29 19:23:34 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:34.884942 | orchestrator | 2025-08-29 19:23:34 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:34.884958 | orchestrator | 2025-08-29 19:23:34 | INFO  | Task 
6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:34.884974 | orchestrator | 2025-08-29 19:23:34 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:34.884988 | orchestrator | 2025-08-29 19:23:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:37.923774 | orchestrator | 2025-08-29 19:23:37 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:37.926279 | orchestrator | 2025-08-29 19:23:37 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:37.931286 | orchestrator | 2025-08-29 19:23:37 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:37.934184 | orchestrator | 2025-08-29 19:23:37 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:37.935764 | orchestrator | 2025-08-29 19:23:37 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:37.935887 | orchestrator | 2025-08-29 19:23:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:41.012685 | orchestrator | 2025-08-29 19:23:41 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:23:41.013218 | orchestrator | 2025-08-29 19:23:41 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED 2025-08-29 19:23:41.016008 | orchestrator | 2025-08-29 19:23:41 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED 2025-08-29 19:23:41.017075 | orchestrator | 2025-08-29 19:23:41 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED 2025-08-29 19:23:41.018308 | orchestrator | 2025-08-29 19:23:41 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:23:41.019094 | orchestrator | 2025-08-29 19:23:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:23:44.057203 | orchestrator | 2025-08-29 19:23:44 | INFO  | Task 
7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:23:44.058387 | orchestrator | 2025-08-29 19:23:44 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state STARTED
2025-08-29 19:23:44.062724 | orchestrator | 2025-08-29 19:23:44 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED
2025-08-29 19:23:44.063210 | orchestrator | 2025-08-29 19:23:44 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state STARTED
2025-08-29 19:23:44.064238 | orchestrator | 2025-08-29 19:23:44 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:23:44.064257 | orchestrator | 2025-08-29 19:23:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:23:47.108088 | orchestrator | 2025-08-29 19:23:47 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:23:47.108197 | orchestrator | 2025-08-29 19:23:47 | INFO  | Task 788a2043-30de-4c09-9f2f-9f3f14bd66ea is in state SUCCESS
2025-08-29 19:23:47.108808 | orchestrator |
2025-08-29 19:23:47.108842 | orchestrator |
2025-08-29 19:23:47.108857 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-08-29 19:23:47.108871 | orchestrator |
2025-08-29 19:23:47.108884 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-08-29 19:23:47.108898 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.428) 0:00:00.428 *********
2025-08-29 19:23:47.108911 | orchestrator | ok: [testbed-manager] => {
2025-08-29 19:23:47.108926 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-08-29 19:23:47.108941 | orchestrator | }
2025-08-29 19:23:47.108954 | orchestrator |
2025-08-29 19:23:47.108967 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-08-29 19:23:47.108980 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.344) 0:00:00.773 *********
2025-08-29 19:23:47.109008 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.109032 | orchestrator |
2025-08-29 19:23:47.109045 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-08-29 19:23:47.109058 | orchestrator | Friday 29 August 2025 19:22:30 +0000 (0:00:02.666) 0:00:03.440 *********
2025-08-29 19:23:47.109071 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-08-29 19:23:47.109084 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-08-29 19:23:47.109097 | orchestrator |
2025-08-29 19:23:47.109110 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-08-29 19:23:47.109122 | orchestrator | Friday 29 August 2025 19:22:32 +0000 (0:00:01.666) 0:00:05.107 *********
2025-08-29 19:23:47.109135 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109148 | orchestrator |
2025-08-29 19:23:47.109192 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-08-29 19:23:47.109205 | orchestrator | Friday 29 August 2025 19:22:34 +0000 (0:00:02.617) 0:00:07.725 *********
2025-08-29 19:23:47.109215 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109227 | orchestrator |
2025-08-29 19:23:47.109296 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-08-29 19:23:47.109310 | orchestrator | Friday 29 August 2025 19:22:37 +0000 (0:00:02.622) 0:00:10.347 *********
2025-08-29 19:23:47.109323 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-08-29 19:23:47.109359 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.109370 | orchestrator |
2025-08-29 19:23:47.109381 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-08-29 19:23:47.109396 | orchestrator | Friday 29 August 2025 19:23:03 +0000 (0:00:25.917) 0:00:36.265 *********
2025-08-29 19:23:47.109410 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109422 | orchestrator |
2025-08-29 19:23:47.109434 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:23:47.109447 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:23:47.109461 | orchestrator |
2025-08-29 19:23:47.109473 | orchestrator |
2025-08-29 19:23:47.109485 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:23:47.109497 | orchestrator | Friday 29 August 2025 19:23:07 +0000 (0:00:04.344) 0:00:40.609 *********
2025-08-29 19:23:47.109509 | orchestrator | ===============================================================================
2025-08-29 19:23:47.109521 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.92s
2025-08-29 19:23:47.109606 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.34s
2025-08-29 19:23:47.109618 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.67s
2025-08-29 19:23:47.109631 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.62s
2025-08-29 19:23:47.109644 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.62s
2025-08-29 19:23:47.109656 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.67s
2025-08-29 19:23:47.109667 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.34s
2025-08-29 19:23:47.109675 | orchestrator |
2025-08-29 19:23:47.109682 | orchestrator |
2025-08-29 19:23:47.109689 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-08-29 19:23:47.109696 | orchestrator |
2025-08-29 19:23:47.109703 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-08-29 19:23:47.109711 | orchestrator | Friday 29 August 2025 19:22:26 +0000 (0:00:00.273) 0:00:00.273 *********
2025-08-29 19:23:47.109718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-08-29 19:23:47.109728 | orchestrator |
2025-08-29 19:23:47.109735 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-08-29 19:23:47.109741 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.535) 0:00:00.809 *********
2025-08-29 19:23:47.109748 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-08-29 19:23:47.109755 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-08-29 19:23:47.109761 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-08-29 19:23:47.109768 | orchestrator |
2025-08-29 19:23:47.109774 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-08-29 19:23:47.109781 | orchestrator | Friday 29 August 2025 19:22:29 +0000 (0:00:01.982) 0:00:02.792 *********
2025-08-29 19:23:47.109788 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109794 | orchestrator |
2025-08-29 19:23:47.109801 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-08-29 19:23:47.109807 | orchestrator | Friday 29 August 2025 19:22:31 +0000 (0:00:02.459) 
0:00:05.251 *********
2025-08-29 19:23:47.109830 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-08-29 19:23:47.109837 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.109844 | orchestrator |
2025-08-29 19:23:47.109851 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-08-29 19:23:47.109857 | orchestrator | Friday 29 August 2025 19:23:03 +0000 (0:00:31.682) 0:00:36.934 *********
2025-08-29 19:23:47.109874 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109881 | orchestrator |
2025-08-29 19:23:47.109887 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-08-29 19:23:47.109894 | orchestrator | Friday 29 August 2025 19:23:05 +0000 (0:00:02.105) 0:00:39.039 *********
2025-08-29 19:23:47.109901 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.109907 | orchestrator |
2025-08-29 19:23:47.109914 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-08-29 19:23:47.109921 | orchestrator | Friday 29 August 2025 19:23:06 +0000 (0:00:01.324) 0:00:40.363 *********
2025-08-29 19:23:47.109927 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109934 | orchestrator |
2025-08-29 19:23:47.109941 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-08-29 19:23:47.109953 | orchestrator | Friday 29 August 2025 19:23:09 +0000 (0:00:02.656) 0:00:43.020 *********
2025-08-29 19:23:47.109960 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109966 | orchestrator |
2025-08-29 19:23:47.109973 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-08-29 19:23:47.109980 | orchestrator | Friday 29 August 2025 19:23:10 +0000 (0:00:00.933) 0:00:44.476 *********
2025-08-29 19:23:47.109986 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.109993 | orchestrator |
2025-08-29 19:23:47.109999 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-08-29 19:23:47.110006 | orchestrator | Friday 29 August 2025 19:23:11 +0000 (0:00:00.933) 0:00:45.410 *********
2025-08-29 19:23:47.110012 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.110065 | orchestrator |
2025-08-29 19:23:47.110073 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:23:47.110079 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:23:47.110086 | orchestrator |
2025-08-29 19:23:47.110093 | orchestrator |
2025-08-29 19:23:47.110099 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:23:47.110106 | orchestrator | Friday 29 August 2025 19:23:12 +0000 (0:00:00.869) 0:00:46.279 *********
2025-08-29 19:23:47.110113 | orchestrator | ===============================================================================
2025-08-29 19:23:47.110119 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.68s
2025-08-29 19:23:47.110126 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.66s
2025-08-29 19:23:47.110132 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.46s
2025-08-29 19:23:47.110139 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.11s
2025-08-29 19:23:47.110145 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.98s
2025-08-29 19:23:47.110152 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.46s
2025-08-29 19:23:47.110158 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.32s
2025-08-29 19:23:47.110165 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.93s
2025-08-29 19:23:47.110171 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.87s
2025-08-29 19:23:47.110178 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.54s
2025-08-29 19:23:47.110185 | orchestrator |
2025-08-29 19:23:47.110191 | orchestrator |
2025-08-29 19:23:47.110198 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-08-29 19:23:47.110204 | orchestrator |
2025-08-29 19:23:47.110211 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 19:23:47.110217 | orchestrator | Friday 29 August 2025 19:22:44 +0000 (0:00:00.191) 0:00:00.191 *********
2025-08-29 19:23:47.110224 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.110230 | orchestrator |
2025-08-29 19:23:47.110237 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 19:23:47.110249 | orchestrator | Friday 29 August 2025 19:22:45 +0000 (0:00:00.993) 0:00:01.184 *********
2025-08-29 19:23:47.110255 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 19:23:47.110262 | orchestrator |
2025-08-29 19:23:47.110269 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 19:23:47.110275 | orchestrator | Friday 29 August 2025 19:22:46 +0000 (0:00:01.094) 0:00:02.279 *********
2025-08-29 19:23:47.110282 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.110288 | orchestrator |
2025-08-29 19:23:47.110297 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 19:23:47.110308 | orchestrator | Friday 29 August 2025 19:22:47 +0000 (0:00:01.452) 0:00:03.731 *********
2025-08-29 19:23:47.110318 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-08-29 19:23:47.110445 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.110457 | orchestrator |
2025-08-29 19:23:47.110469 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 19:23:47.110480 | orchestrator | Friday 29 August 2025 19:23:37 +0000 (0:00:49.909) 0:00:53.640 *********
2025-08-29 19:23:47.110490 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.110499 | orchestrator |
2025-08-29 19:23:47.110509 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:23:47.110519 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:23:47.110552 | orchestrator |
2025-08-29 19:23:47.110563 | orchestrator |
2025-08-29 19:23:47.110574 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:23:47.110596 | orchestrator | Friday 29 August 2025 19:23:43 +0000 (0:00:06.269) 0:00:59.910 *********
2025-08-29 19:23:47.110607 | orchestrator | ===============================================================================
2025-08-29 19:23:47.110617 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.91s
2025-08-29 19:23:47.110627 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.27s
2025-08-29 19:23:47.110637 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.45s
2025-08-29 19:23:47.110648 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.09s
2025-08-29 19:23:47.110660 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.99s
2025-08-29 19:23:47.110671 | orchestrator | 2025-08-29 19:23:47 | INFO  | Task 
6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED
2025-08-29 19:23:47.111554 | orchestrator | 2025-08-29 19:23:47 | INFO  | Task 6478122e-00c9-4db0-aa08-594db15e0275 is in state SUCCESS
2025-08-29 19:23:47.112599 | orchestrator |
2025-08-29 19:23:47.112653 | orchestrator |
2025-08-29 19:23:47.112667 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:23:47.112680 | orchestrator |
2025-08-29 19:23:47.112691 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:23:47.112703 | orchestrator | Friday 29 August 2025 19:22:28 +0000 (0:00:00.196) 0:00:00.196 *********
2025-08-29 19:23:47.112714 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-08-29 19:23:47.112727 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-08-29 19:23:47.112738 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-08-29 19:23:47.112751 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-08-29 19:23:47.112763 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-08-29 19:23:47.112776 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-08-29 19:23:47.112787 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-08-29 19:23:47.112800 | orchestrator |
2025-08-29 19:23:47.112813 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-08-29 19:23:47.112824 | orchestrator |
2025-08-29 19:23:47.112851 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-08-29 19:23:47.113035 | orchestrator | Friday 29 August 2025 19:22:30 +0000 (0:00:01.134) 0:00:01.331 *********
2025-08-29 19:23:47.113070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:23:47.113086 | orchestrator |
2025-08-29 19:23:47.113097 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-08-29 19:23:47.113110 | orchestrator | Friday 29 August 2025 19:22:32 +0000 (0:00:02.553) 0:00:03.885 *********
2025-08-29 19:23:47.113122 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:23:47.113134 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:23:47.113146 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:23:47.113156 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:23:47.113167 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:23:47.113178 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:23:47.113189 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.113199 | orchestrator |
2025-08-29 19:23:47.113209 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-08-29 19:23:47.113220 | orchestrator | Friday 29 August 2025 19:22:35 +0000 (0:00:02.824) 0:00:06.709 *********
2025-08-29 19:23:47.113232 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.113243 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:23:47.113254 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:23:47.113265 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:23:47.113276 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:23:47.113288 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:23:47.113300 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:23:47.113313 | orchestrator |
2025-08-29 19:23:47.113326 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-08-29 19:23:47.113338 | orchestrator | Friday 29 August 2025 19:22:39 +0000 (0:00:04.007) 0:00:10.717 *********
2025-08-29 19:23:47.113349 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:23:47.113360 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:23:47.113371 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:23:47.113382 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:23:47.113393 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:23:47.113404 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.113416 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:23:47.113427 | orchestrator |
2025-08-29 19:23:47.113438 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-08-29 19:23:47.113450 | orchestrator | Friday 29 August 2025 19:22:42 +0000 (0:00:02.595) 0:00:13.313 *********
2025-08-29 19:23:47.113460 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:23:47.113470 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:23:47.113481 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:23:47.113491 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:23:47.113502 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:23:47.113513 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.113549 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:23:47.113561 | orchestrator |
2025-08-29 19:23:47.113571 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-08-29 19:23:47.113582 | orchestrator | Friday 29 August 2025 19:22:51 +0000 (0:00:09.483) 0:00:22.796 *********
2025-08-29 19:23:47.113593 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:23:47.113605 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:23:47.113616 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:23:47.113628 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:23:47.113639 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:23:47.113650 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:23:47.113661 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.113672 | orchestrator |
2025-08-29 19:23:47.113683 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-08-29 19:23:47.113708 | orchestrator | Friday 29 August 2025 19:23:21 +0000 (0:00:29.534) 0:00:52.331 *********
2025-08-29 19:23:47.113722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:23:47.113735 | orchestrator |
2025-08-29 19:23:47.113747 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-08-29 19:23:47.113761 | orchestrator | Friday 29 August 2025 19:23:23 +0000 (0:00:01.961) 0:00:54.292 *********
2025-08-29 19:23:47.113773 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-08-29 19:23:47.113787 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-08-29 19:23:47.113799 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-08-29 19:23:47.113810 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-08-29 19:23:47.113842 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-08-29 19:23:47.113856 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-08-29 19:23:47.113867 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-08-29 19:23:47.113880 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-08-29 19:23:47.113891 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-08-29 19:23:47.113905 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-08-29 19:23:47.113918 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-08-29 19:23:47.113931 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-08-29 19:23:47.113944 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-08-29 19:23:47.113955 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-08-29 19:23:47.113967 | orchestrator |
2025-08-29 19:23:47.113977 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-08-29 19:23:47.113993 | orchestrator | Friday 29 August 2025 19:23:27 +0000 (0:00:04.355) 0:00:58.647 *********
2025-08-29 19:23:47.114007 | orchestrator | ok: [testbed-manager]
2025-08-29 19:23:47.114078 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:23:47.114094 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:23:47.114105 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:23:47.114116 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:23:47.114127 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:23:47.114137 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:23:47.114147 | orchestrator |
2025-08-29 19:23:47.114158 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-08-29 19:23:47.114169 | orchestrator | Friday 29 August 2025 19:23:28 +0000 (0:00:01.209) 0:00:59.857 *********
2025-08-29 19:23:47.114180 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:23:47.114192 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:23:47.114204 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:23:47.114216 | orchestrator | changed: [testbed-manager]
2025-08-29 19:23:47.114227 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:23:47.114238 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:23:47.114249 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:23:47.114260 | orchestrator |
2025-08-29 19:23:47.114271 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-08-29 19:23:47.114282 | orchestrator | Friday 29 August 2025 19:23:31 +0000 
(0:00:02.850) 0:01:02.708 ********* 2025-08-29 19:23:47.114293 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:23:47.114302 | orchestrator | ok: [testbed-manager] 2025-08-29 19:23:47.114313 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:23:47.114324 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:23:47.114335 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:23:47.114347 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:23:47.114358 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:23:47.114369 | orchestrator | 2025-08-29 19:23:47.114381 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-08-29 19:23:47.114404 | orchestrator | Friday 29 August 2025 19:23:33 +0000 (0:00:01.990) 0:01:04.698 ********* 2025-08-29 19:23:47.114415 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:23:47.114426 | orchestrator | ok: [testbed-manager] 2025-08-29 19:23:47.114437 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:23:47.114448 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:23:47.114459 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:23:47.114468 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:23:47.114478 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:23:47.114488 | orchestrator | 2025-08-29 19:23:47.114498 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-08-29 19:23:47.114509 | orchestrator | Friday 29 August 2025 19:23:35 +0000 (0:00:02.206) 0:01:06.905 ********* 2025-08-29 19:23:47.114520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-08-29 19:23:47.114554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:23:47.114567 | orchestrator | 2025-08-29 
19:23:47.114578 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-08-29 19:23:47.114589 | orchestrator | Friday 29 August 2025 19:23:37 +0000 (0:00:01.731) 0:01:08.637 ********* 2025-08-29 19:23:47.114600 | orchestrator | changed: [testbed-manager] 2025-08-29 19:23:47.114611 | orchestrator | 2025-08-29 19:23:47.114622 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-08-29 19:23:47.114633 | orchestrator | Friday 29 August 2025 19:23:39 +0000 (0:00:02.373) 0:01:11.011 ********* 2025-08-29 19:23:47.114644 | orchestrator | changed: [testbed-manager] 2025-08-29 19:23:47.114656 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:23:47.114667 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:23:47.114679 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:23:47.114690 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:23:47.114701 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:23:47.114712 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:23:47.114723 | orchestrator | 2025-08-29 19:23:47.114734 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:23:47.114744 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114754 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114763 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114773 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114801 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114812 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114823 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:23:47.114833 | orchestrator | 2025-08-29 19:23:47.114843 | orchestrator | 2025-08-29 19:23:47.114849 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:23:47.114856 | orchestrator | Friday 29 August 2025 19:23:43 +0000 (0:00:04.002) 0:01:15.013 ********* 2025-08-29 19:23:47.114862 | orchestrator | =============================================================================== 2025-08-29 19:23:47.114868 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 29.53s 2025-08-29 19:23:47.114883 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.48s 2025-08-29 19:23:47.114889 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.36s 2025-08-29 19:23:47.114895 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.01s 2025-08-29 19:23:47.114901 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 4.00s 2025-08-29 19:23:47.114907 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.85s 2025-08-29 19:23:47.114914 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.82s 2025-08-29 19:23:47.114920 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.60s 2025-08-29 19:23:47.114926 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.55s 2025-08-29 19:23:47.114932 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.37s 2025-08-29 19:23:47.114938 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 
2.21s
2025-08-29 19:23:47.114944 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.99s
2025-08-29 19:23:47.114951 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.96s
2025-08-29 19:23:47.114957 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.73s
2025-08-29 19:23:47.114963 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s
2025-08-29 19:23:47.114969 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s
2025-08-29 19:23:47.114976 | orchestrator | 2025-08-29 19:23:47 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:23:47.114982 | orchestrator | 2025-08-29 19:23:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:23:50.155169 | orchestrator | 2025-08-29 19:23:50 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:23:50.156809 | orchestrator | 2025-08-29 19:23:50 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED
2025-08-29 19:23:50.159736 | orchestrator | 2025-08-29 19:23:50 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:23:50.159974 | orchestrator | 2025-08-29 19:23:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:25:03.294354 | orchestrator | 2025-08-29 19:25:03 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:25:03.296370 | orchestrator | 2025-08-29 19:25:03 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED
2025-08-29 19:25:03.298863 | orchestrator | 2025-08-29 19:25:03 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:25:03.298970 | orchestrator | 2025-08-29 19:25:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:25:06.353238 | orchestrator | 2025-08-29 19:25:06 | INFO  | Task
7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:25:06.354667 | orchestrator | 2025-08-29 19:25:06 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state STARTED
2025-08-29 19:25:06.356558 | orchestrator | 2025-08-29 19:25:06 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:25:06.356582 | orchestrator | 2025-08-29 19:25:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:25:09.392970 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:25:09.393213 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:25:09.398404 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task 6d2d2991-6002-47d0-a676-ae214a323ce1 is in state SUCCESS
2025-08-29 19:25:09.402222 | orchestrator |
2025-08-29 19:25:09.402304 | orchestrator |
2025-08-29 19:25:09.402319 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 19:25:09.402331 | orchestrator |
2025-08-29 19:25:09.402342 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 19:25:09.402352 | orchestrator | Friday 29 August 2025 19:22:20 +0000 (0:00:00.284) 0:00:00.284 *********
2025-08-29 19:25:09.402363 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:25:09.402375 | orchestrator |
2025-08-29 19:25:09.402385 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 19:25:09.402395 | orchestrator | Friday 29 August 2025 19:22:21 +0000 (0:00:01.111) 0:00:01.395 *********
2025-08-29 19:25:09.402404 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 19:25:09.402414 | orchestrator
| changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 19:25:09.402424 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 19:25:09.402433 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 19:25:09.402443 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 19:25:09.402452 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 19:25:09.402462 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402472 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402512 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402522 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402532 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402542 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402552 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402563 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402572 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402582 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 19:25:09.402611 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402622 | orchestrator | changed: [testbed-node-1] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402632 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402641 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 19:25:09.402672 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 19:25:09.402682 | orchestrator | 2025-08-29 19:25:09.402692 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-08-29 19:25:09.402702 | orchestrator | Friday 29 August 2025 19:22:26 +0000 (0:00:04.448) 0:00:05.843 ********* 2025-08-29 19:25:09.402712 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:25:09.402723 | orchestrator | 2025-08-29 19:25:09.402733 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-08-29 19:25:09.402743 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:01.323) 0:00:07.167 ********* 2025-08-29 19:25:09.402757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402836 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402891 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.402918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402982 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.402994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403004 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403052 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403062 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.403078 | orchestrator | 2025-08-29 19:25:09.403088 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-08-29 19:25:09.403098 | orchestrator | Friday 29 August 2025 19:22:33 +0000 (0:00:05.891) 0:00:13.058 ********* 2025-08-29 19:25:09.403113 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403124 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403134 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403145 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:25:09.403155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403235 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:25:09.403245 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:25:09.403255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403286 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:25:09.403301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403346 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:25:09.403356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403403 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:25:09.403420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403441 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:25:09.403467 | orchestrator | 2025-08-29 19:25:09.403508 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-08-29 19:25:09.403525 | orchestrator | Friday 29 August 2025 19:22:34 +0000 (0:00:01.534) 0:00:14.593 ********* 2025-08-29 19:25:09.403543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403561 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403621 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:25:09.403639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403676 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:25:09.403686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 
19:25:09.403711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403721 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:25:09.403731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403773 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:25:09.403783 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:25:09.403793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 19:25:09.403803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:25:09.403828 
| orchestrator | skipping: [testbed-node-4]
2025-08-29 19:25:09.403837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.403848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.403858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.403868 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:25:09.403878 | orchestrator |
2025-08-29 19:25:09.403893 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-08-29 19:25:09.403903 | orchestrator | Friday 29 August 2025 19:22:37 +0000 (0:00:03.091) 0:00:17.684 *********
2025-08-29 19:25:09.403912 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:25:09.403922 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:25:09.403932 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:25:09.403941 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:25:09.403951 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:25:09.403966 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:25:09.403976 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:25:09.403986 | orchestrator |
2025-08-29 19:25:09.403995 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-08-29 19:25:09.404005 | orchestrator | Friday 29 August 2025 19:22:39 +0000 (0:00:01.170) 0:00:18.855 *********
2025-08-29 19:25:09.404015 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:25:09.404025 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:25:09.404034 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:25:09.404043 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:25:09.404053 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:25:09.404063 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:25:09.404072 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:25:09.404082 | orchestrator |
2025-08-29 19:25:09.404091 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-08-29 19:25:09.404101 | orchestrator | Friday 29 August 2025 19:22:41 +0000 (0:00:02.321) 0:00:21.177 *********
2025-08-29 19:25:09.404111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404121 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404172 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.404267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404277 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.404375 | orchestrator |
2025-08-29 19:25:09.404385 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-08-29 19:25:09.404395 | orchestrator | Friday 29 August 2025 19:22:47 +0000 (0:00:06.082) 0:00:27.259 *********
2025-08-29 19:25:09.404405 | orchestrator | [WARNING]: Skipped
2025-08-29 19:25:09.404415 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-08-29 19:25:09.404424 | orchestrator | to this access issue:
2025-08-29 19:25:09.404435 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-08-29 19:25:09.404451 | orchestrator | directory
2025-08-29 19:25:09.404467 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:25:09.404516 | orchestrator |
2025-08-29 19:25:09.404532 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-08-29 19:25:09.404548 | orchestrator | Friday 29 August 2025 19:22:48 +0000 (0:00:00.948) 0:00:28.207 *********
2025-08-29 19:25:09.404562 | orchestrator | [WARNING]: Skipped
2025-08-29 19:25:09.404576 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-08-29 19:25:09.404598 | orchestrator | to this access issue:
2025-08-29 19:25:09.404613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-08-29 19:25:09.404627 | orchestrator | directory
2025-08-29 19:25:09.404640 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:25:09.404655 | orchestrator |
2025-08-29 19:25:09.404669 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-08-29 19:25:09.404683 | orchestrator | Friday 29 August 2025 19:22:49 +0000 (0:00:01.050) 0:00:29.258 *********
2025-08-29 19:25:09.404697 | orchestrator | [WARNING]: Skipped
2025-08-29 19:25:09.404711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-08-29 19:25:09.404725 | orchestrator | to this access issue:
2025-08-29 19:25:09.404742 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-08-29 19:25:09.404757 | orchestrator | directory
2025-08-29 19:25:09.404771 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:25:09.404786 | orchestrator |
2025-08-29 19:25:09.404802 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-08-29 19:25:09.404819 | orchestrator | Friday 29 August 2025 19:22:50 +0000 (0:00:00.670) 0:00:29.928 *********
2025-08-29 19:25:09.404835 | orchestrator | [WARNING]: Skipped
2025-08-29 19:25:09.404852 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-08-29 19:25:09.404864 | orchestrator | to this access issue:
2025-08-29 19:25:09.404873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-08-29 19:25:09.404882 | orchestrator | directory
2025-08-29 19:25:09.404892 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:25:09.404901 | orchestrator |
2025-08-29 19:25:09.404911 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-08-29 19:25:09.404920 | orchestrator | Friday 29 August 2025 19:22:50 +0000 (0:00:00.779) 0:00:30.707 *********
2025-08-29 19:25:09.404930 | orchestrator | changed: [testbed-manager]
2025-08-29 19:25:09.404940 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:25:09.404962 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:25:09.404972 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:25:09.404982 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:25:09.404991 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:25:09.405007 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:25:09.405022 | orchestrator |
2025-08-29 19:25:09.405039 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-08-29 19:25:09.405055 | orchestrator | Friday 29 August 2025 19:22:55 +0000 (0:00:04.957) 0:00:35.664 *********
2025-08-29 19:25:09.405072 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405093 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405119 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405129 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405138 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 19:25:09.405147 | orchestrator |
2025-08-29 19:25:09.405157 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-08-29 19:25:09.405167 | orchestrator | Friday 29 August 2025 19:22:58 +0000 (0:00:02.972) 0:00:38.637 *********
2025-08-29 19:25:09.405176 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:25:09.405186 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:25:09.405195 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:25:09.405204 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:25:09.405214 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:25:09.405223 | orchestrator | changed: [testbed-manager]
2025-08-29 19:25:09.405233 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:25:09.405242 | orchestrator |
2025-08-29 19:25:09.405252 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-08-29 19:25:09.405261 | orchestrator | Friday 29 August 2025 19:23:03 +0000 (0:00:04.306) 0:00:42.944 *********
2025-08-29 19:25:09.405272 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405302 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405330 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405369 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405396 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405413 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405448 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405458 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405469 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405569 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405588 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405599 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405609 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405619 | orchestrator |
2025-08-29 19:25:09.405628 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-08-29 19:25:09.405638 | orchestrator | Friday 29 August 2025 19:23:05 +0000 (0:00:02.259) 0:00:45.203 *********
2025-08-29 19:25:09.405648 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405658 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405668 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405682 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405691 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405701 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405710 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 19:25:09.405719 | orchestrator |
2025-08-29 19:25:09.405729 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-08-29 19:25:09.405739 | orchestrator | Friday 29 August 2025 19:23:08 +0000 (0:00:03.015) 0:00:48.218 *********
2025-08-29 19:25:09.405749 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405758 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405768 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405777 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405787 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405796 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405806 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 19:25:09.405815 | orchestrator |
2025-08-29 19:25:09.405825 | orchestrator | TASK [common : Check common containers] ****************************************
2025-08-29 19:25:09.405834 | orchestrator | Friday 29 August 2025 19:23:11 +0000 (0:00:02.784) 0:00:51.002 *********
2025-08-29 19:25:09.405850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405877 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.405923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.405940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.406199 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:25:09.406224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 19:25:09.406235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment':
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 19:25:09.406264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406275 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406342 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:25:09.406382 | orchestrator | 2025-08-29 19:25:09.406392 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-08-29 19:25:09.406406 | orchestrator | Friday 29 August 2025 19:23:14 +0000 (0:00:03.666) 0:00:54.669 ********* 2025-08-29 19:25:09.406417 | orchestrator | changed: [testbed-manager] 2025-08-29 19:25:09.406427 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:09.406436 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:09.406446 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 19:25:09.406456 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:25:09.406465 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:25:09.406475 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:25:09.406512 | orchestrator | 2025-08-29 19:25:09.406529 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-08-29 19:25:09.406539 | orchestrator | Friday 29 August 2025 19:23:16 +0000 (0:00:01.914) 0:00:56.584 ********* 2025-08-29 19:25:09.406548 | orchestrator | changed: [testbed-manager] 2025-08-29 19:25:09.406558 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:09.406568 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:09.406577 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:09.406587 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:25:09.406596 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:25:09.406605 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:25:09.406615 | orchestrator | 2025-08-29 19:25:09.406625 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 19:25:09.406634 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:01.372) 0:00:57.957 ********* 2025-08-29 19:25:09.406644 | orchestrator | 2025-08-29 19:25:09.406654 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 19:25:09.406664 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.068) 0:00:58.026 ********* 2025-08-29 19:25:09.406679 | orchestrator | 2025-08-29 19:25:09.406694 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 19:25:09.406710 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.076) 0:00:58.102 ********* 2025-08-29 19:25:09.406725 | orchestrator | 2025-08-29 19:25:09.406740 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2025-08-29 19:25:09.406755 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.075) 0:00:58.178 ********* 2025-08-29 19:25:09.406771 | orchestrator | 2025-08-29 19:25:09.406786 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 19:25:09.406802 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.259) 0:00:58.438 ********* 2025-08-29 19:25:09.406818 | orchestrator | 2025-08-29 19:25:09.406834 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 19:25:09.406852 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.065) 0:00:58.503 ********* 2025-08-29 19:25:09.406869 | orchestrator | 2025-08-29 19:25:09.406886 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 19:25:09.406903 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.067) 0:00:58.570 ********* 2025-08-29 19:25:09.406916 | orchestrator | 2025-08-29 19:25:09.406927 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-08-29 19:25:09.406947 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:00.098) 0:00:58.669 ********* 2025-08-29 19:25:09.406958 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:09.406970 | orchestrator | changed: [testbed-manager] 2025-08-29 19:25:09.406981 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:25:09.406992 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:09.407002 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:25:09.407011 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:09.407020 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:25:09.407030 | orchestrator | 2025-08-29 19:25:09.407040 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-08-29 19:25:09.407049 | 
orchestrator | Friday 29 August 2025 19:24:05 +0000 (0:00:46.301) 0:01:44.970 ********* 2025-08-29 19:25:09.407059 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:09.407069 | orchestrator | changed: [testbed-manager] 2025-08-29 19:25:09.407078 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:09.407088 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:25:09.407097 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:25:09.407107 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:09.407116 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:25:09.407126 | orchestrator | 2025-08-29 19:25:09.407135 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-08-29 19:25:09.407145 | orchestrator | Friday 29 August 2025 19:24:56 +0000 (0:00:50.904) 0:02:35.875 ********* 2025-08-29 19:25:09.407167 | orchestrator | ok: [testbed-manager] 2025-08-29 19:25:09.407178 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:25:09.407187 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:25:09.407197 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:25:09.407206 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:25:09.407216 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:25:09.407225 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:25:09.407235 | orchestrator | 2025-08-29 19:25:09.407244 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-08-29 19:25:09.407254 | orchestrator | Friday 29 August 2025 19:24:58 +0000 (0:00:02.025) 0:02:37.900 ********* 2025-08-29 19:25:09.407264 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:09.407274 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:25:09.407283 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:25:09.407293 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:09.407302 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:09.407312 | orchestrator 
| changed: [testbed-node-4] 2025-08-29 19:25:09.407321 | orchestrator | changed: [testbed-manager] 2025-08-29 19:25:09.407331 | orchestrator | 2025-08-29 19:25:09.407341 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:25:09.407352 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407362 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407377 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407388 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407397 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407407 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407417 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-08-29 19:25:09.407426 | orchestrator | 2025-08-29 19:25:09.407436 | orchestrator | 2025-08-29 19:25:09.407446 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:25:09.407455 | orchestrator | Friday 29 August 2025 19:25:06 +0000 (0:00:08.684) 0:02:46.584 ********* 2025-08-29 19:25:09.407465 | orchestrator | =============================================================================== 2025-08-29 19:25:09.407474 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 50.90s 2025-08-29 19:25:09.407544 | orchestrator | common : Restart fluentd container ------------------------------------- 46.30s 2025-08-29 19:25:09.407554 | orchestrator | common : Restart cron container 
----------------------------------------- 8.68s 2025-08-29 19:25:09.407564 | orchestrator | common : Copying over config.json files for services -------------------- 6.08s 2025-08-29 19:25:09.407573 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.89s 2025-08-29 19:25:09.407583 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.96s 2025-08-29 19:25:09.407592 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.45s 2025-08-29 19:25:09.407602 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.31s 2025-08-29 19:25:09.407611 | orchestrator | common : Check common containers ---------------------------------------- 3.67s 2025-08-29 19:25:09.407621 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.09s 2025-08-29 19:25:09.407637 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.02s 2025-08-29 19:25:09.407647 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.97s 2025-08-29 19:25:09.407656 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.78s 2025-08-29 19:25:09.407666 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.32s 2025-08-29 19:25:09.407681 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.26s 2025-08-29 19:25:09.407691 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.03s 2025-08-29 19:25:09.407701 | orchestrator | common : Creating log volume -------------------------------------------- 1.91s 2025-08-29 19:25:09.407710 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.53s 2025-08-29 19:25:09.407720 | orchestrator | common : Link kolla_logs volume to 
/var/log/kolla ----------------------- 1.37s 2025-08-29 19:25:09.407730 | orchestrator | common : include_tasks -------------------------------------------------- 1.32s 2025-08-29 19:25:09.407739 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:09.407749 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:09.407759 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:09.407769 | orchestrator | 2025-08-29 19:25:09 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:09.407778 | orchestrator | 2025-08-29 19:25:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:12.493922 | orchestrator | 2025-08-29 19:25:12 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:12.494084 | orchestrator | 2025-08-29 19:25:12 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:12.494101 | orchestrator | 2025-08-29 19:25:12 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:12.494113 | orchestrator | 2025-08-29 19:25:12 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:12.494124 | orchestrator | 2025-08-29 19:25:12 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:12.494136 | orchestrator | 2025-08-29 19:25:12 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:12.494147 | orchestrator | 2025-08-29 19:25:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:15.528972 | orchestrator | 2025-08-29 19:25:15 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:15.529089 | orchestrator | 2025-08-29 19:25:15 | INFO  | Task 
7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:15.529103 | orchestrator | 2025-08-29 19:25:15 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:15.530961 | orchestrator | 2025-08-29 19:25:15 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:15.531031 | orchestrator | 2025-08-29 19:25:15 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:15.531043 | orchestrator | 2025-08-29 19:25:15 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:15.531053 | orchestrator | 2025-08-29 19:25:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:18.556985 | orchestrator | 2025-08-29 19:25:18 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:18.557094 | orchestrator | 2025-08-29 19:25:18 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:18.557251 | orchestrator | 2025-08-29 19:25:18 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:18.559679 | orchestrator | 2025-08-29 19:25:18 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:18.560161 | orchestrator | 2025-08-29 19:25:18 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:18.560862 | orchestrator | 2025-08-29 19:25:18 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:18.560888 | orchestrator | 2025-08-29 19:25:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:21.683394 | orchestrator | 2025-08-29 19:25:21 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:21.684619 | orchestrator | 2025-08-29 19:25:21 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:21.684662 | orchestrator | 2025-08-29 19:25:21 | INFO  | Task 
6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:21.684676 | orchestrator | 2025-08-29 19:25:21 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:21.684687 | orchestrator | 2025-08-29 19:25:21 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:21.684698 | orchestrator | 2025-08-29 19:25:21 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:21.684709 | orchestrator | 2025-08-29 19:25:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:24.639391 | orchestrator | 2025-08-29 19:25:24 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:24.639836 | orchestrator | 2025-08-29 19:25:24 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:24.642787 | orchestrator | 2025-08-29 19:25:24 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:24.643439 | orchestrator | 2025-08-29 19:25:24 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:24.644019 | orchestrator | 2025-08-29 19:25:24 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:24.644716 | orchestrator | 2025-08-29 19:25:24 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:24.644760 | orchestrator | 2025-08-29 19:25:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:27.694549 | orchestrator | 2025-08-29 19:25:27 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:27.695794 | orchestrator | 2025-08-29 19:25:27 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:27.696422 | orchestrator | 2025-08-29 19:25:27 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:27.696684 | orchestrator | 2025-08-29 19:25:27 | INFO  | Task 
2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:27.697174 | orchestrator | 2025-08-29 19:25:27 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state STARTED 2025-08-29 19:25:27.698068 | orchestrator | 2025-08-29 19:25:27 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:27.698173 | orchestrator | 2025-08-29 19:25:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:30.726152 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:30.727830 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:30.730216 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:30.733052 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:30.733760 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:30.735595 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task 0c2cd707-5a91-4bc5-a3e5-ed7475c2a9bf is in state SUCCESS 2025-08-29 19:25:30.736953 | orchestrator | 2025-08-29 19:25:30 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:30.736983 | orchestrator | 2025-08-29 19:25:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:33.775422 | orchestrator | 2025-08-29 19:25:33 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:33.777746 | orchestrator | 2025-08-29 19:25:33 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:33.777821 | orchestrator | 2025-08-29 19:25:33 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:33.777838 | orchestrator | 2025-08-29 19:25:33 | INFO  | Task 
6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:33.779554 | orchestrator | 2025-08-29 19:25:33 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:33.779591 | orchestrator | 2025-08-29 19:25:33 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:33.779603 | orchestrator | 2025-08-29 19:25:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:36.804047 | orchestrator | 2025-08-29 19:25:36 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:36.804344 | orchestrator | 2025-08-29 19:25:36 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:36.804934 | orchestrator | 2025-08-29 19:25:36 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:36.805604 | orchestrator | 2025-08-29 19:25:36 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state STARTED 2025-08-29 19:25:36.806672 | orchestrator | 2025-08-29 19:25:36 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:36.808224 | orchestrator | 2025-08-29 19:25:36 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:36.808266 | orchestrator | 2025-08-29 19:25:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:39.889052 | orchestrator | 2025-08-29 19:25:39 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:39.889130 | orchestrator | 2025-08-29 19:25:39 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:39.889139 | orchestrator | 2025-08-29 19:25:39 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:39.890691 | orchestrator | 2025-08-29 19:25:39.890713 | orchestrator | 2025-08-29 19:25:39.890719 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-08-29 19:25:39.890725 | orchestrator | 2025-08-29 19:25:39.890731 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:25:39.890737 | orchestrator | Friday 29 August 2025 19:25:13 +0000 (0:00:00.332) 0:00:00.332 ********* 2025-08-29 19:25:39.890742 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:25:39.890749 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:25:39.890754 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:25:39.890759 | orchestrator | 2025-08-29 19:25:39.890764 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:25:39.890789 | orchestrator | Friday 29 August 2025 19:25:13 +0000 (0:00:00.437) 0:00:00.769 ********* 2025-08-29 19:25:39.890795 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-08-29 19:25:39.890801 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-08-29 19:25:39.890806 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-08-29 19:25:39.890811 | orchestrator | 2025-08-29 19:25:39.890816 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-08-29 19:25:39.890822 | orchestrator | 2025-08-29 19:25:39.890827 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-08-29 19:25:39.890832 | orchestrator | Friday 29 August 2025 19:25:14 +0000 (0:00:00.587) 0:00:01.356 ********* 2025-08-29 19:25:39.890837 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:25:39.890844 | orchestrator | 2025-08-29 19:25:39.890849 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-08-29 19:25:39.890855 | orchestrator | Friday 29 August 2025 19:25:15 +0000 (0:00:00.774) 0:00:02.131 ********* 2025-08-29 19:25:39.890860 | orchestrator | changed: 
[testbed-node-0] => (item=memcached) 2025-08-29 19:25:39.890865 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 19:25:39.890871 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 19:25:39.890876 | orchestrator | 2025-08-29 19:25:39.890881 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-08-29 19:25:39.890887 | orchestrator | Friday 29 August 2025 19:25:16 +0000 (0:00:00.944) 0:00:03.076 ********* 2025-08-29 19:25:39.890892 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 19:25:39.890897 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 19:25:39.890902 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 19:25:39.890907 | orchestrator | 2025-08-29 19:25:39.890912 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-08-29 19:25:39.890918 | orchestrator | Friday 29 August 2025 19:25:18 +0000 (0:00:01.997) 0:00:05.074 ********* 2025-08-29 19:25:39.890923 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:39.890928 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:39.890947 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:39.890952 | orchestrator | 2025-08-29 19:25:39.890957 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-08-29 19:25:39.890963 | orchestrator | Friday 29 August 2025 19:25:20 +0000 (0:00:02.009) 0:00:07.083 ********* 2025-08-29 19:25:39.890968 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:39.890973 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:39.890978 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:39.890983 | orchestrator | 2025-08-29 19:25:39.890988 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:25:39.890994 | orchestrator | testbed-node-0 : ok=7 
 changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:25:39.891000 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:25:39.891005 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:25:39.891010 | orchestrator | 2025-08-29 19:25:39.891016 | orchestrator | 2025-08-29 19:25:39.891021 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:25:39.891026 | orchestrator | Friday 29 August 2025 19:25:28 +0000 (0:00:08.351) 0:00:15.435 ********* 2025-08-29 19:25:39.891031 | orchestrator | =============================================================================== 2025-08-29 19:25:39.891036 | orchestrator | memcached : Restart memcached container --------------------------------- 8.35s 2025-08-29 19:25:39.891041 | orchestrator | memcached : Check memcached container ----------------------------------- 2.01s 2025-08-29 19:25:39.891051 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.00s 2025-08-29 19:25:39.891056 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.95s 2025-08-29 19:25:39.891061 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.77s 2025-08-29 19:25:39.891066 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-08-29 19:25:39.891071 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-08-29 19:25:39.891076 | orchestrator | 2025-08-29 19:25:39.891081 | orchestrator | 2025-08-29 19:25:39.891086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:25:39.891091 | orchestrator | 2025-08-29 19:25:39.891097 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-08-29 19:25:39.891102 | orchestrator | Friday 29 August 2025 19:25:14 +0000 (0:00:00.362) 0:00:00.362 ********* 2025-08-29 19:25:39.891107 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:25:39.891112 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:25:39.891117 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:25:39.891122 | orchestrator | 2025-08-29 19:25:39.891128 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:25:39.891162 | orchestrator | Friday 29 August 2025 19:25:14 +0000 (0:00:00.386) 0:00:00.748 ********* 2025-08-29 19:25:39.891168 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-08-29 19:25:39.891173 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-08-29 19:25:39.891178 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-08-29 19:25:39.891183 | orchestrator | 2025-08-29 19:25:39.891189 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-08-29 19:25:39.891194 | orchestrator | 2025-08-29 19:25:39.891199 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-08-29 19:25:39.891204 | orchestrator | Friday 29 August 2025 19:25:15 +0000 (0:00:00.499) 0:00:01.247 ********* 2025-08-29 19:25:39.891209 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:25:39.891214 | orchestrator | 2025-08-29 19:25:39.891219 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-08-29 19:25:39.891224 | orchestrator | Friday 29 August 2025 19:25:15 +0000 (0:00:00.670) 0:00:01.918 ********* 2025-08-29 19:25:39.891231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891282 | orchestrator | 2025-08-29 19:25:39.891287 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-08-29 19:25:39.891292 | orchestrator 
| Friday 29 August 2025 19:25:17 +0000 (0:00:01.330) 0:00:03.249 ********* 2025-08-29 19:25:39.891297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-08-29 19:25:39.891363 | orchestrator | 2025-08-29 19:25:39.891369 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-08-29 19:25:39.891375 | orchestrator | Friday 29 August 2025 19:25:20 +0000 (0:00:03.189) 0:00:06.438 ********* 2025-08-29 19:25:39.891381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891427 | orchestrator | 2025-08-29 19:25:39.891436 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-08-29 19:25:39.891442 | orchestrator | Friday 29 August 2025 19:25:23 +0000 (0:00:02.778) 0:00:09.217 ********* 2025-08-29 19:25:39.891449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 19:25:39.891559 | orchestrator | 2025-08-29 19:25:39.891564 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 19:25:39.891570 | orchestrator | Friday 29 August 2025 19:25:25 +0000 (0:00:01.980) 0:00:11.197 ********* 2025-08-29 19:25:39.891576 | orchestrator | 2025-08-29 19:25:39.891581 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 19:25:39.891591 | orchestrator | Friday 29 August 2025 19:25:25 +0000 (0:00:00.231) 0:00:11.431 ********* 2025-08-29 19:25:39.891597 | orchestrator | 2025-08-29 19:25:39.891603 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 19:25:39.891609 | orchestrator | Friday 29 August 2025 19:25:25 +0000 (0:00:00.207) 0:00:11.639 ********* 2025-08-29 19:25:39.891614 | orchestrator | 2025-08-29 19:25:39.891620 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-08-29 19:25:39.891626 | orchestrator | Friday 29 August 2025 19:25:25 +0000 (0:00:00.257) 0:00:11.896 ********* 2025-08-29 19:25:39.891631 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:39.891637 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:39.891643 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:39.891649 | orchestrator | 2025-08-29 19:25:39.891655 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] 
********************* 2025-08-29 19:25:39.891660 | orchestrator | Friday 29 August 2025 19:25:33 +0000 (0:00:08.131) 0:00:20.028 ********* 2025-08-29 19:25:39.891666 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:25:39.891672 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:25:39.891677 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:25:39.891686 | orchestrator | 2025-08-29 19:25:39.891691 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:25:39.891696 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:25:39.891702 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:25:39.891707 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:25:39.891712 | orchestrator | 2025-08-29 19:25:39.891717 | orchestrator | 2025-08-29 19:25:39.891722 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:25:39.891730 | orchestrator | Friday 29 August 2025 19:25:38 +0000 (0:00:05.033) 0:00:25.062 ********* 2025-08-29 19:25:39.891736 | orchestrator | =============================================================================== 2025-08-29 19:25:39.891741 | orchestrator | redis : Restart redis container ----------------------------------------- 8.13s 2025-08-29 19:25:39.891746 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.03s 2025-08-29 19:25:39.891751 | orchestrator | redis : Copying over default config.json files -------------------------- 3.19s 2025-08-29 19:25:39.891756 | orchestrator | redis : Copying over redis config files --------------------------------- 2.78s 2025-08-29 19:25:39.891761 | orchestrator | redis : Check redis containers ------------------------------------------ 1.98s 2025-08-29 
19:25:39.891766 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.33s 2025-08-29 19:25:39.891771 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.70s 2025-08-29 19:25:39.891776 | orchestrator | redis : include_tasks --------------------------------------------------- 0.67s 2025-08-29 19:25:39.891781 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-08-29 19:25:39.891786 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-08-29 19:25:39.891791 | orchestrator | 2025-08-29 19:25:39 | INFO  | Task 6a2d30f8-1140-4c7a-b933-be54ac6d5b1f is in state SUCCESS 2025-08-29 19:25:39.891797 | orchestrator | 2025-08-29 19:25:39 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:39.891802 | orchestrator | 2025-08-29 19:25:39 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:39.891807 | orchestrator | 2025-08-29 19:25:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:42.938628 | orchestrator | 2025-08-29 19:25:42 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:42.938724 | orchestrator | 2025-08-29 19:25:42 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:42.938736 | orchestrator | 2025-08-29 19:25:42 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:42.938746 | orchestrator | 2025-08-29 19:25:42 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:42.938755 | orchestrator | 2025-08-29 19:25:42 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:42.938765 | orchestrator | 2025-08-29 19:25:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:46.009761 | orchestrator | 2025-08-29 19:25:46 | INFO  | Task 
bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:46.012299 | orchestrator | 2025-08-29 19:25:46 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:46.015008 | orchestrator | 2025-08-29 19:25:46 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:46.016434 | orchestrator | 2025-08-29 19:25:46 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:46.019552 | orchestrator | 2025-08-29 19:25:46 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:46.019628 | orchestrator | 2025-08-29 19:25:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:49.148970 | orchestrator | 2025-08-29 19:25:49 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:49.149065 | orchestrator | 2025-08-29 19:25:49 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:49.149079 | orchestrator | 2025-08-29 19:25:49 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:49.149089 | orchestrator | 2025-08-29 19:25:49 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:49.149099 | orchestrator | 2025-08-29 19:25:49 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:49.149109 | orchestrator | 2025-08-29 19:25:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:52.206876 | orchestrator | 2025-08-29 19:25:52 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:52.206974 | orchestrator | 2025-08-29 19:25:52 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:52.206984 | orchestrator | 2025-08-29 19:25:52 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:52.206989 | orchestrator | 2025-08-29 19:25:52 | INFO  | Task 
2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:52.207011 | orchestrator | 2025-08-29 19:25:52 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:52.207017 | orchestrator | 2025-08-29 19:25:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:55.335300 | orchestrator | 2025-08-29 19:25:55 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:55.335725 | orchestrator | 2025-08-29 19:25:55 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:55.337765 | orchestrator | 2025-08-29 19:25:55 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:55.338147 | orchestrator | 2025-08-29 19:25:55 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:55.338653 | orchestrator | 2025-08-29 19:25:55 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:55.338677 | orchestrator | 2025-08-29 19:25:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:25:58.468972 | orchestrator | 2025-08-29 19:25:58 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED 2025-08-29 19:25:58.469083 | orchestrator | 2025-08-29 19:25:58 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:25:58.469648 | orchestrator | 2025-08-29 19:25:58 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED 2025-08-29 19:25:58.470284 | orchestrator | 2025-08-29 19:25:58 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:25:58.472770 | orchestrator | 2025-08-29 19:25:58 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:25:58.472815 | orchestrator | 2025-08-29 19:25:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:26:01.560118 | orchestrator | 2025-08-29 19:26:01 | INFO  | Task 
bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:01.560232 | orchestrator | 2025-08-29 19:26:01 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:01.560961 | orchestrator | 2025-08-29 19:26:01 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state STARTED
2025-08-29 19:26:01.561761 | orchestrator | 2025-08-29 19:26:01 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:01.562605 | orchestrator | 2025-08-29 19:26:01 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:01.562784 | orchestrator | 2025-08-29 19:26:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:04.622634 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:04.623296 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:04.626260 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task 7e6a5b46-07ad-4081-85ee-78fd103b6173 is in state SUCCESS
2025-08-29 19:26:04.630328 | orchestrator |
2025-08-29 19:26:04.630415 | orchestrator |
2025-08-29 19:26:04.630474 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-08-29 19:26:04.630497 | orchestrator |
2025-08-29 19:26:04.630517 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-08-29 19:26:04.630532 | orchestrator | Friday 29 August 2025 19:22:21 +0000 (0:00:00.247) 0:00:00.247 *********
2025-08-29 19:26:04.630544 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:04.630557 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:04.630568 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:04.630579 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.630590 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.630601 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.630612 | orchestrator |
2025-08-29 19:26:04.630624 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-08-29 19:26:04.630636 | orchestrator | Friday 29 August 2025 19:22:22 +0000 (0:00:00.731) 0:00:00.979 *********
2025-08-29 19:26:04.630647 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.630659 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.630670 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.630680 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.630691 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.630702 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.630713 | orchestrator |
2025-08-29 19:26:04.630723 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-08-29 19:26:04.630734 | orchestrator | Friday 29 August 2025 19:22:22 +0000 (0:00:00.540) 0:00:01.519 *********
2025-08-29 19:26:04.630745 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.630756 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.630767 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.630777 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.630788 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.630799 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.630809 | orchestrator |
2025-08-29 19:26:04.630820 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-08-29 19:26:04.630831 | orchestrator | Friday 29 August 2025 19:22:23 +0000 (0:00:00.730) 0:00:02.249 *********
2025-08-29 19:26:04.630844 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:26:04.630857 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:26:04.630869 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:26:04.630881 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.630914 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.630926 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.630938 | orchestrator |
2025-08-29 19:26:04.630951 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-08-29 19:26:04.630964 | orchestrator | Friday 29 August 2025 19:22:25 +0000 (0:00:01.738) 0:00:03.987 *********
2025-08-29 19:26:04.631003 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:26:04.631016 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:26:04.631029 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:26:04.631043 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.631055 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.631067 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.631080 | orchestrator |
2025-08-29 19:26:04.631091 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-08-29 19:26:04.631102 | orchestrator | Friday 29 August 2025 19:22:25 +0000 (0:00:00.922) 0:00:04.910 *********
2025-08-29 19:26:04.631113 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:26:04.631123 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:26:04.631134 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:26:04.631145 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.631156 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.631166 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.631177 | orchestrator |
2025-08-29 19:26:04.631188 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-08-29 19:26:04.631199 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:01.309) 0:00:06.220 *********
2025-08-29 19:26:04.631210 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.631221 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.631232 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.631242 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.631253 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.631264 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.631274 | orchestrator |
2025-08-29 19:26:04.631285 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-08-29 19:26:04.631296 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.570) 0:00:06.790 *********
2025-08-29 19:26:04.631307 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.631318 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.631328 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.631339 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.631349 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.631360 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.631371 | orchestrator |
2025-08-29 19:26:04.631382 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-08-29 19:26:04.631392 | orchestrator | Friday 29 August 2025 19:22:29 +0000 (0:00:01.264) 0:00:08.054 *********
2025-08-29 19:26:04.631403 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 19:26:04.631414 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 19:26:04.631425 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.631436 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 19:26:04.631469 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 19:26:04.631480 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.631491 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 19:26:04.631502 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 19:26:04.631512 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 19:26:04.631561 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 19:26:04.631595 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.631607 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 19:26:04.631618 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 19:26:04.631629 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.631640 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.631651 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 19:26:04.631670 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 19:26:04.631681 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.631691 | orchestrator |
2025-08-29 19:26:04.631703 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-08-29 19:26:04.631713 | orchestrator | Friday 29 August 2025 19:22:29 +0000 (0:00:00.723) 0:00:08.778 *********
2025-08-29 19:26:04.631724 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.631735 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.631746 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.631757 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.631768 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.631779 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.631789 | orchestrator |
2025-08-29 19:26:04.631801 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-08-29 19:26:04.631813 | orchestrator | Friday 29 August 2025 19:22:31 +0000 (0:00:01.578) 0:00:10.357 *********
2025-08-29 19:26:04.631824 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:04.631835 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:04.631846 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:04.631857 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.631867 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.631878 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.631889 | orchestrator |
2025-08-29 19:26:04.631900 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-08-29 19:26:04.631911 | orchestrator | Friday 29 August 2025 19:22:32 +0000 (0:00:01.180) 0:00:11.537 *********
2025-08-29 19:26:04.631921 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.631932 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:26:04.631943 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:26:04.631960 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:26:04.631971 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.631982 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.631993 | orchestrator |
2025-08-29 19:26:04.632004 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-08-29 19:26:04.632015 | orchestrator | Friday 29 August 2025 19:22:38 +0000 (0:00:06.010) 0:00:17.547 *********
2025-08-29 19:26:04.632025 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.632036 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.632047 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.632057 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.632068 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.632079 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.632089 | orchestrator |
2025-08-29 19:26:04.632100 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-08-29 19:26:04.632111 | orchestrator | Friday 29 August 2025 19:22:39 +0000 (0:00:01.347) 0:00:18.895 *********
2025-08-29 19:26:04.632122 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.632133 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:04.632143 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.632154 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:04.632165 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.632176 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.632186 | orchestrator |
2025-08-29 19:26:04.632197 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-08-29 19:26:04.632210 | orchestrator | Friday 29 August 2025 19:22:41 +0000 (0:00:01.597) 0:00:20.492 *********
2025-08-29 19:26:04.632221 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:04.632232 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:04.632243 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:04.632254 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.632264 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.632282 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.632293 | orchestrator |
2025-08-29 19:26:04.632304 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-08-29 19:26:04.632315 | orchestrator | Friday 29 August 2025 19:22:42 +0000 (0:00:01.343) 0:00:21.836 *********
2025-08-29 19:26:04.632326 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-08-29 19:26:04.632337 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-08-29 19:26:04.632348 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-08-29 19:26:04.632359 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-08-29 19:26:04.632370 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-08-29 19:26:04.632381 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-08-29 19:26:04.632391 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-08-29 19:26:04.632402 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-08-29 19:26:04.632413 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-08-29 19:26:04.632423 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-08-29 19:26:04.632434 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-08-29 19:26:04.632488 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-08-29 19:26:04.632500 | orchestrator |
2025-08-29 19:26:04.632511 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-08-29 19:26:04.632522 | orchestrator | Friday 29 August 2025 19:22:45 +0000 (0:00:02.673) 0:00:24.510 *********
2025-08-29 19:26:04.632533 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:26:04.632544 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:26:04.632555 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:26:04.632566 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.632577 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.632587 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.632598 | orchestrator |
2025-08-29 19:26:04.632617 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-08-29 19:26:04.632629 | orchestrator |
2025-08-29 19:26:04.632640 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-08-29 19:26:04.632651 | orchestrator | Friday 29 August 2025 19:22:47 +0000 (0:00:01.532) 0:00:26.042 *********
2025-08-29 19:26:04.632662 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.632673 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.632684 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.632695 | orchestrator |
2025-08-29 19:26:04.632706 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-08-29 19:26:04.632717 | orchestrator | Friday 29 August 2025 19:22:48 +0000 (0:00:01.012) 0:00:27.054 *********
2025-08-29 19:26:04.632728 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.632739 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.632750 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.632760 | orchestrator |
2025-08-29 19:26:04.632771 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-08-29 19:26:04.632782 | orchestrator | Friday 29 August 2025 19:22:49 +0000 (0:00:01.190) 0:00:28.245 *********
2025-08-29 19:26:04.632793 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.632804 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.632814 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.632825 | orchestrator |
2025-08-29 19:26:04.632836 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-08-29 19:26:04.632847 | orchestrator | Friday 29 August 2025 19:22:50 +0000 (0:00:00.829) 0:00:29.074 *********
2025-08-29 19:26:04.632858 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.632868 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.632879 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.632890 | orchestrator |
2025-08-29 19:26:04.632901 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-08-29 19:26:04.632912 | orchestrator | Friday 29 August 2025 19:22:51 +0000 (0:00:00.980) 0:00:30.055 *********
2025-08-29 19:26:04.632930 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.632942 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.632953 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.632964 | orchestrator |
2025-08-29 19:26:04.632975 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-08-29 19:26:04.632986 | orchestrator | Friday 29 August 2025 19:22:52 +0000 (0:00:01.064) 0:00:31.120 *********
2025-08-29 19:26:04.633002 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.633014 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.633024 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.633050 | orchestrator |
2025-08-29 19:26:04.633073 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-08-29 19:26:04.633084 | orchestrator | Friday 29 August 2025 19:22:53 +0000 (0:00:01.101) 0:00:32.221 *********
2025-08-29 19:26:04.633095 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.633106 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.633117 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.633128 | orchestrator |
2025-08-29 19:26:04.633139 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-08-29 19:26:04.633150 | orchestrator | Friday 29 August 2025 19:22:55 +0000 (0:00:01.720) 0:00:33.942 *********
2025-08-29 19:26:04.633161 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:26:04.633172 | orchestrator |
2025-08-29 19:26:04.633183 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-08-29 19:26:04.633194 | orchestrator | Friday 29 August 2025 19:22:55 +0000 (0:00:00.566) 0:00:34.509 *********
2025-08-29 19:26:04.633205 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.633216 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.633227 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.633238 | orchestrator |
2025-08-29 19:26:04.633249 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-08-29 19:26:04.633260 | orchestrator | Friday 29 August 2025 19:22:57 +0000 (0:00:02.173) 0:00:36.683 *********
2025-08-29 19:26:04.633271 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.633282 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.633293 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.633303 | orchestrator |
2025-08-29 19:26:04.633314 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-08-29 19:26:04.633325 | orchestrator | Friday 29 August 2025 19:22:58 +0000 (0:00:00.558) 0:00:37.241 *********
2025-08-29 19:26:04.633336 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.633347 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.633358 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.633369 | orchestrator |
2025-08-29 19:26:04.633380 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-08-29 19:26:04.633391 | orchestrator | Friday 29 August 2025 19:22:59 +0000 (0:00:00.784) 0:00:38.026 *********
2025-08-29 19:26:04.633402 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.633413 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.633424 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.633435 | orchestrator |
2025-08-29 19:26:04.633766 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-08-29 19:26:04.633827 | orchestrator | Friday 29 August 2025 19:23:00 +0000 (0:00:01.748) 0:00:39.775 *********
2025-08-29 19:26:04.633850 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.633871 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.633892 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.633910 | orchestrator |
2025-08-29 19:26:04.633926 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-08-29 19:26:04.633938 | orchestrator | Friday 29 August 2025 19:23:01 +0000 (0:00:00.752) 0:00:40.527 *********
2025-08-29 19:26:04.633949 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.633993 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.634006 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.634094 | orchestrator |
2025-08-29 19:26:04.634111 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-08-29 19:26:04.634123 | orchestrator | Friday 29 August 2025 19:23:02 +0000 (0:00:00.511) 0:00:41.038 *********
2025-08-29 19:26:04.634135 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.634147 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.634158 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.634169 | orchestrator |
2025-08-29 19:26:04.634219 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-08-29 19:26:04.634242 | orchestrator | Friday 29 August 2025 19:23:04 +0000 (0:00:02.275) 0:00:43.314 *********
2025-08-29 19:26:04.634260 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-08-29 19:26:04.634282 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-08-29 19:26:04.634302 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-08-29 19:26:04.634321 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-08-29 19:26:04.634340 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-08-29 19:26:04.634353 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-08-29 19:26:04.634364 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-08-29 19:26:04.634375 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-08-29 19:26:04.634386 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-08-29 19:26:04.634397 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-08-29 19:26:04.634413 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-08-29 19:26:04.634431 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-08-29 19:26:04.634501 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-08-29 19:26:04.634523 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-08-29 19:26:04.634543 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.634565 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.634583 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.634602 | orchestrator |
2025-08-29 19:26:04.634614 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-08-29 19:26:04.634625 | orchestrator | Friday 29 August 2025 19:23:59 +0000 (0:00:55.114) 0:01:38.428 *********
2025-08-29 19:26:04.634636 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.634647 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.634658 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.634669 | orchestrator |
2025-08-29 19:26:04.634681 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-08-29 19:26:04.634700 | orchestrator | Friday 29 August 2025 19:23:59 +0000 (0:00:00.333) 0:01:38.762 *********
2025-08-29 19:26:04.634733 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.634753 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.634771 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.634790 | orchestrator |
2025-08-29 19:26:04.634808 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-08-29 19:26:04.634827 | orchestrator | Friday 29 August 2025 19:24:00 +0000 (0:00:01.099) 0:01:39.861 *********
2025-08-29 19:26:04.634845 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.634864 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.634883 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.634901 | orchestrator |
2025-08-29 19:26:04.634920 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-08-29 19:26:04.634939 | orchestrator | Friday 29 August 2025 19:24:02 +0000 (0:00:01.107) 0:01:40.969 *********
2025-08-29 19:26:04.634958 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.634976 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.634989 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.635000 | orchestrator |
2025-08-29 19:26:04.635011 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-08-29 19:26:04.635021 | orchestrator | Friday 29 August 2025 19:24:27 +0000 (0:00:25.530) 0:02:06.500 *********
2025-08-29 19:26:04.635032 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.635043 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.635053 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.635064 | orchestrator |
2025-08-29 19:26:04.635074 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-08-29 19:26:04.635085 | orchestrator | Friday 29 August 2025 19:24:28 +0000 (0:00:00.718) 0:02:07.327 *********
2025-08-29 19:26:04.635096 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.635107 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.635118 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.635128 | orchestrator |
2025-08-29 19:26:04.635139 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-08-29 19:26:04.635150 | orchestrator | Friday 29 August 2025 19:24:29 +0000 (0:00:01.039) 0:02:08.045 *********
2025-08-29 19:26:04.635171 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.635182 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.635193 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.635211 | orchestrator |
2025-08-29 19:26:04.635228 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-08-29 19:26:04.635247 | orchestrator | Friday 29 August 2025 19:24:30 +0000 (0:00:01.051) 0:02:09.085 *********
2025-08-29 19:26:04.635264 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.635283 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.635303 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.635320 | orchestrator |
2025-08-29 19:26:04.635339 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-08-29 19:26:04.635358 | orchestrator | Friday 29 August 2025 19:24:31 +0000 (0:00:01.051) 0:02:10.137 *********
2025-08-29 19:26:04.635376 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.635396 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.635410 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.635420 | orchestrator |
2025-08-29 19:26:04.635434 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-08-29 19:26:04.635474 | orchestrator | Friday 29 August 2025 19:24:31 +0000 (0:00:00.353) 0:02:10.490 *********
2025-08-29 19:26:04.635492 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.635509 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.635529 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.635547 | orchestrator |
2025-08-29 19:26:04.635663 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-08-29 19:26:04.635679 | orchestrator | Friday 29 August 2025 19:24:32 +0000 (0:00:00.675) 0:02:11.166 *********
2025-08-29 19:26:04.635690 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.635714 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.635725 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.635736 | orchestrator |
2025-08-29 19:26:04.635747 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-08-29 19:26:04.635759 | orchestrator | Friday 29 August 2025 19:24:32 +0000 (0:00:00.729) 0:02:11.895 *********
2025-08-29 19:26:04.635778 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.635796 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.635813 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.635832 | orchestrator |
2025-08-29 19:26:04.635852 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-08-29 19:26:04.635881 | orchestrator | Friday 29 August 2025 19:24:34 +0000 (0:00:01.433) 0:02:13.329 *********
2025-08-29 19:26:04.635900 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:26:04.635918 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:26:04.635937 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:26:04.635955 | orchestrator |
2025-08-29 19:26:04.635967 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-08-29 19:26:04.635977 | orchestrator | Friday 29 August 2025 19:24:35 +0000 (0:00:00.766) 0:02:14.095 *********
2025-08-29 19:26:04.635989 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.635999 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.636015 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.636035 | orchestrator |
2025-08-29 19:26:04.636056 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-08-29 19:26:04.636075 | orchestrator | Friday 29 August 2025 19:24:35 +0000 (0:00:00.304) 0:02:14.399 *********
2025-08-29 19:26:04.636094 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:04.636106 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:04.636116 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:04.636128 | orchestrator |
2025-08-29 19:26:04.636138 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-08-29 19:26:04.636149 | orchestrator | Friday 29 August 2025 19:24:35 +0000 (0:00:00.287) 0:02:14.686 *********
2025-08-29 19:26:04.636161 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.636180 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.636198 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.636217 | orchestrator |
2025-08-29 19:26:04.636239 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-08-29 19:26:04.636260 | orchestrator | Friday 29 August 2025 19:24:36 +0000 (0:00:00.814) 0:02:15.501 *********
2025-08-29 19:26:04.636281 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:04.636299 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:04.636316 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:04.636334 | orchestrator |
2025-08-29 19:26:04.636352 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-08-29 19:26:04.636372 | orchestrator | Friday 29 August 2025 19:24:37 +0000 (0:00:00.591) 0:02:16.093 *********
2025-08-29 19:26:04.636391 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 19:26:04.636410 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 19:26:04.636428 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 19:26:04.636541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 19:26:04.636554 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 19:26:04.636565 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 19:26:04.636576 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 19:26:04.636588 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 19:26:04.636611 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 19:26:04.636623 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-08-29 19:26:04.636634 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 19:26:04.636658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 19:26:04.636670 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-08-29 19:26:04.636681 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 19:26:04.636691 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 19:26:04.636702 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 19:26:04.636713 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 19:26:04.636724 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 19:26:04.636735 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 19:26:04.636745 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 19:26:04.636756 | orchestrator |
2025-08-29 19:26:04.636767 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-08-29 19:26:04.636778 | orchestrator |
2025-08-29 19:26:04.636789 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-08-29 19:26:04.636800 | orchestrator | Friday 29 August 2025 19:24:40 +0000 (0:00:02.968) 0:02:19.062 *********
2025-08-29 19:26:04.636811 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:04.636822 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:04.636832 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:04.636843 | orchestrator |
2025-08-29 19:26:04.636854 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-08-29 19:26:04.636864 | orchestrator | Friday 29 August 2025 19:24:40 +0000 (0:00:00.431) 0:02:19.494 *********
2025-08-29 19:26:04.636875 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:04.636886 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:04.636896 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:04.636907 | orchestrator |
2025-08-29 19:26:04.636933 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-08-29 19:26:04.636944 | orchestrator | Friday 29 August 2025 19:24:41 +0000 (0:00:00.613) 0:02:20.107 *********
2025-08-29 19:26:04.636955 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:04.636966 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:04.636992 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:04.637015 | orchestrator |
2025-08-29 19:26:04.637026 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-08-29 19:26:04.637037 | orchestrator | Friday 29 August 2025 19:24:41 +0000 (0:00:00.275) 0:02:20.383 *********
2025-08-29 19:26:04.637047 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:26:04.637061 | orchestrator |
2025-08-29 19:26:04.637078 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-08-29 19:26:04.637092 | orchestrator | Friday 29 August 2025 19:24:42 +0000 (0:00:00.543) 0:02:20.927 *********
2025-08-29 19:26:04.637109 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:04.637125
| orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:04.637141 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:04.637158 | orchestrator | 2025-08-29 19:26:04.637175 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-08-29 19:26:04.637191 | orchestrator | Friday 29 August 2025 19:24:42 +0000 (0:00:00.291) 0:02:21.218 ********* 2025-08-29 19:26:04.637276 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:04.637297 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:04.637313 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:04.637330 | orchestrator | 2025-08-29 19:26:04.637348 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-08-29 19:26:04.637366 | orchestrator | Friday 29 August 2025 19:24:42 +0000 (0:00:00.272) 0:02:21.490 ********* 2025-08-29 19:26:04.637382 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:04.637395 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:04.637405 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:04.637414 | orchestrator | 2025-08-29 19:26:04.637424 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-08-29 19:26:04.637434 | orchestrator | Friday 29 August 2025 19:24:42 +0000 (0:00:00.266) 0:02:21.757 ********* 2025-08-29 19:26:04.637469 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:26:04.637479 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:26:04.637488 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:26:04.637498 | orchestrator | 2025-08-29 19:26:04.637508 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-08-29 19:26:04.637517 | orchestrator | Friday 29 August 2025 19:24:43 +0000 (0:00:00.675) 0:02:22.433 ********* 2025-08-29 19:26:04.637527 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:26:04.637536 | orchestrator | 
changed: [testbed-node-4] 2025-08-29 19:26:04.637546 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:26:04.637555 | orchestrator | 2025-08-29 19:26:04.637565 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-08-29 19:26:04.637575 | orchestrator | Friday 29 August 2025 19:24:44 +0000 (0:00:01.088) 0:02:23.521 ********* 2025-08-29 19:26:04.637584 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:26:04.637594 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:26:04.637604 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:26:04.637613 | orchestrator | 2025-08-29 19:26:04.637623 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-08-29 19:26:04.637632 | orchestrator | Friday 29 August 2025 19:24:45 +0000 (0:00:01.196) 0:02:24.718 ********* 2025-08-29 19:26:04.637642 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:26:04.637652 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:26:04.637661 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:26:04.637671 | orchestrator | 2025-08-29 19:26:04.637681 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 19:26:04.637690 | orchestrator | 2025-08-29 19:26:04.637712 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 19:26:04.637723 | orchestrator | Friday 29 August 2025 19:24:58 +0000 (0:00:12.440) 0:02:37.159 ********* 2025-08-29 19:26:04.637732 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.637742 | orchestrator | 2025-08-29 19:26:04.637751 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 19:26:04.637761 | orchestrator | Friday 29 August 2025 19:24:59 +0000 (0:00:01.561) 0:02:38.720 ********* 2025-08-29 19:26:04.637770 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.637781 | 
orchestrator | 2025-08-29 19:26:04.637799 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 19:26:04.637815 | orchestrator | Friday 29 August 2025 19:25:00 +0000 (0:00:00.438) 0:02:39.159 ********* 2025-08-29 19:26:04.637831 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 19:26:04.637847 | orchestrator | 2025-08-29 19:26:04.637864 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 19:26:04.637881 | orchestrator | Friday 29 August 2025 19:25:00 +0000 (0:00:00.545) 0:02:39.704 ********* 2025-08-29 19:26:04.637899 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.637915 | orchestrator | 2025-08-29 19:26:04.637933 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 19:26:04.637943 | orchestrator | Friday 29 August 2025 19:25:01 +0000 (0:00:00.845) 0:02:40.549 ********* 2025-08-29 19:26:04.637961 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.637971 | orchestrator | 2025-08-29 19:26:04.637981 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 19:26:04.637991 | orchestrator | Friday 29 August 2025 19:25:02 +0000 (0:00:00.565) 0:02:41.114 ********* 2025-08-29 19:26:04.638000 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 19:26:04.638010 | orchestrator | 2025-08-29 19:26:04.638055 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 19:26:04.638071 | orchestrator | Friday 29 August 2025 19:25:03 +0000 (0:00:01.560) 0:02:42.675 ********* 2025-08-29 19:26:04.638087 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 19:26:04.638103 | orchestrator | 2025-08-29 19:26:04.638119 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 
19:26:04.638135 | orchestrator | Friday 29 August 2025 19:25:04 +0000 (0:00:00.841) 0:02:43.517 ********* 2025-08-29 19:26:04.638160 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.638176 | orchestrator | 2025-08-29 19:26:04.638192 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 19:26:04.638210 | orchestrator | Friday 29 August 2025 19:25:05 +0000 (0:00:00.500) 0:02:44.017 ********* 2025-08-29 19:26:04.638227 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.638244 | orchestrator | 2025-08-29 19:26:04.638262 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-08-29 19:26:04.638278 | orchestrator | 2025-08-29 19:26:04.638296 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-08-29 19:26:04.638313 | orchestrator | Friday 29 August 2025 19:25:05 +0000 (0:00:00.746) 0:02:44.764 ********* 2025-08-29 19:26:04.638330 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.638347 | orchestrator | 2025-08-29 19:26:04.638365 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-08-29 19:26:04.638382 | orchestrator | Friday 29 August 2025 19:25:05 +0000 (0:00:00.148) 0:02:44.912 ********* 2025-08-29 19:26:04.638394 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 19:26:04.638404 | orchestrator | 2025-08-29 19:26:04.638415 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-08-29 19:26:04.638431 | orchestrator | Friday 29 August 2025 19:25:06 +0000 (0:00:00.243) 0:02:45.156 ********* 2025-08-29 19:26:04.638468 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.638484 | orchestrator | 2025-08-29 19:26:04.638500 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 
2025-08-29 19:26:04.638516 | orchestrator | Friday 29 August 2025 19:25:07 +0000 (0:00:00.920) 0:02:46.076 ********* 2025-08-29 19:26:04.638532 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.638549 | orchestrator | 2025-08-29 19:26:04.638565 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-08-29 19:26:04.638581 | orchestrator | Friday 29 August 2025 19:25:08 +0000 (0:00:01.846) 0:02:47.922 ********* 2025-08-29 19:26:04.638597 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.638613 | orchestrator | 2025-08-29 19:26:04.638627 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-08-29 19:26:04.638642 | orchestrator | Friday 29 August 2025 19:25:09 +0000 (0:00:00.726) 0:02:48.649 ********* 2025-08-29 19:26:04.638658 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.638674 | orchestrator | 2025-08-29 19:26:04.638689 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-08-29 19:26:04.638706 | orchestrator | Friday 29 August 2025 19:25:10 +0000 (0:00:00.549) 0:02:49.199 ********* 2025-08-29 19:26:04.638722 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.638738 | orchestrator | 2025-08-29 19:26:04.638755 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-08-29 19:26:04.638770 | orchestrator | Friday 29 August 2025 19:25:16 +0000 (0:00:06.349) 0:02:55.548 ********* 2025-08-29 19:26:04.638786 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.638816 | orchestrator | 2025-08-29 19:26:04.638832 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-08-29 19:26:04.638849 | orchestrator | Friday 29 August 2025 19:25:29 +0000 (0:00:12.506) 0:03:08.055 ********* 2025-08-29 19:26:04.638864 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.638881 | orchestrator 
| 2025-08-29 19:26:04.638898 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-08-29 19:26:04.638914 | orchestrator | 2025-08-29 19:26:04.638932 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-08-29 19:26:04.638948 | orchestrator | Friday 29 August 2025 19:25:29 +0000 (0:00:00.549) 0:03:08.605 ********* 2025-08-29 19:26:04.638964 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:26:04.638978 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:26:04.638987 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:26:04.638997 | orchestrator | 2025-08-29 19:26:04.639019 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-08-29 19:26:04.639029 | orchestrator | Friday 29 August 2025 19:25:29 +0000 (0:00:00.301) 0:03:08.906 ********* 2025-08-29 19:26:04.639038 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639048 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:26:04.639057 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:26:04.639067 | orchestrator | 2025-08-29 19:26:04.639076 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-08-29 19:26:04.639086 | orchestrator | Friday 29 August 2025 19:25:30 +0000 (0:00:00.339) 0:03:09.246 ********* 2025-08-29 19:26:04.639095 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:26:04.639105 | orchestrator | 2025-08-29 19:26:04.639115 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-08-29 19:26:04.639124 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:00.768) 0:03:10.015 ********* 2025-08-29 19:26:04.639133 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639143 | orchestrator | 2025-08-29 19:26:04.639152 | 
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-08-29 19:26:04.639162 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:00.181) 0:03:10.196 ********* 2025-08-29 19:26:04.639171 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639180 | orchestrator | 2025-08-29 19:26:04.639190 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-08-29 19:26:04.639199 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:00.207) 0:03:10.403 ********* 2025-08-29 19:26:04.639208 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639218 | orchestrator | 2025-08-29 19:26:04.639227 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-08-29 19:26:04.639237 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:00.191) 0:03:10.595 ********* 2025-08-29 19:26:04.639247 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639256 | orchestrator | 2025-08-29 19:26:04.639265 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-08-29 19:26:04.639275 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:00.193) 0:03:10.789 ********* 2025-08-29 19:26:04.639284 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639294 | orchestrator | 2025-08-29 19:26:04.639311 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-08-29 19:26:04.639321 | orchestrator | Friday 29 August 2025 19:25:32 +0000 (0:00:00.194) 0:03:10.983 ********* 2025-08-29 19:26:04.639330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639340 | orchestrator | 2025-08-29 19:26:04.639349 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-08-29 19:26:04.639358 | orchestrator | Friday 29 August 2025 19:25:32 +0000 (0:00:00.195) 0:03:11.179 ********* 
2025-08-29 19:26:04.639368 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639377 | orchestrator | 2025-08-29 19:26:04.639387 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-08-29 19:26:04.639406 | orchestrator | Friday 29 August 2025 19:25:32 +0000 (0:00:00.184) 0:03:11.363 ********* 2025-08-29 19:26:04.639415 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639425 | orchestrator | 2025-08-29 19:26:04.639434 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-08-29 19:26:04.639483 | orchestrator | Friday 29 August 2025 19:25:32 +0000 (0:00:00.172) 0:03:11.536 ********* 2025-08-29 19:26:04.639500 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639515 | orchestrator | 2025-08-29 19:26:04.639529 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-08-29 19:26:04.639539 | orchestrator | Friday 29 August 2025 19:25:32 +0000 (0:00:00.223) 0:03:11.759 ********* 2025-08-29 19:26:04.639549 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-08-29 19:26:04.639558 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-08-29 19:26:04.639568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639577 | orchestrator | 2025-08-29 19:26:04.639587 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-08-29 19:26:04.639596 | orchestrator | Friday 29 August 2025 19:25:33 +0000 (0:00:00.787) 0:03:12.546 ********* 2025-08-29 19:26:04.639606 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639615 | orchestrator | 2025-08-29 19:26:04.639624 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-08-29 19:26:04.639634 | orchestrator | Friday 29 August 2025 19:25:33 +0000 (0:00:00.215) 0:03:12.761 ********* 2025-08-29 
19:26:04.639643 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639653 | orchestrator | 2025-08-29 19:26:04.639662 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-08-29 19:26:04.639672 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.211) 0:03:12.972 ********* 2025-08-29 19:26:04.639681 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639691 | orchestrator | 2025-08-29 19:26:04.639700 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-08-29 19:26:04.639709 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.205) 0:03:13.178 ********* 2025-08-29 19:26:04.639719 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639728 | orchestrator | 2025-08-29 19:26:04.639738 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-08-29 19:26:04.639747 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.216) 0:03:13.395 ********* 2025-08-29 19:26:04.639757 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639766 | orchestrator | 2025-08-29 19:26:04.639776 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-08-29 19:26:04.639785 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.209) 0:03:13.604 ********* 2025-08-29 19:26:04.639795 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639804 | orchestrator | 2025-08-29 19:26:04.639814 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-08-29 19:26:04.639824 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.254) 0:03:13.858 ********* 2025-08-29 19:26:04.639833 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639843 | orchestrator | 2025-08-29 19:26:04.639860 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] 
************************ 2025-08-29 19:26:04.639870 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:00.277) 0:03:14.136 ********* 2025-08-29 19:26:04.639879 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639889 | orchestrator | 2025-08-29 19:26:04.639899 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-08-29 19:26:04.639908 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:00.283) 0:03:14.419 ********* 2025-08-29 19:26:04.639918 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639927 | orchestrator | 2025-08-29 19:26:04.639936 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-08-29 19:26:04.639946 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:00.230) 0:03:14.650 ********* 2025-08-29 19:26:04.639967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.639977 | orchestrator | 2025-08-29 19:26:04.639987 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-08-29 19:26:04.639996 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:00.178) 0:03:14.828 ********* 2025-08-29 19:26:04.640006 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640015 | orchestrator | 2025-08-29 19:26:04.640024 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-08-29 19:26:04.640034 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:00.146) 0:03:14.975 ********* 2025-08-29 19:26:04.640044 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-08-29 19:26:04.640053 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-08-29 19:26:04.640063 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-08-29 19:26:04.640073 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-08-29 
19:26:04.640082 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640091 | orchestrator | 2025-08-29 19:26:04.640101 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-08-29 19:26:04.640110 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:00.746) 0:03:15.722 ********* 2025-08-29 19:26:04.640120 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640129 | orchestrator | 2025-08-29 19:26:04.640139 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-08-29 19:26:04.640155 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:00.201) 0:03:15.924 ********* 2025-08-29 19:26:04.640172 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640188 | orchestrator | 2025-08-29 19:26:04.640203 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-08-29 19:26:04.640219 | orchestrator | Friday 29 August 2025 19:25:37 +0000 (0:00:00.199) 0:03:16.124 ********* 2025-08-29 19:26:04.640234 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640249 | orchestrator | 2025-08-29 19:26:04.640262 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-08-29 19:26:04.640280 | orchestrator | Friday 29 August 2025 19:25:37 +0000 (0:00:00.179) 0:03:16.303 ********* 2025-08-29 19:26:04.640296 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640313 | orchestrator | 2025-08-29 19:26:04.640328 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-08-29 19:26:04.640344 | orchestrator | Friday 29 August 2025 19:25:37 +0000 (0:00:00.233) 0:03:16.537 ********* 2025-08-29 19:26:04.640354 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-08-29 19:26:04.640363 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-08-29 19:26:04.640373 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640383 | orchestrator | 2025-08-29 19:26:04.640392 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-08-29 19:26:04.640402 | orchestrator | Friday 29 August 2025 19:25:37 +0000 (0:00:00.354) 0:03:16.892 ********* 2025-08-29 19:26:04.640412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.640421 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:26:04.640431 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:26:04.640461 | orchestrator | 2025-08-29 19:26:04.640472 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-08-29 19:26:04.640482 | orchestrator | Friday 29 August 2025 19:25:38 +0000 (0:00:00.431) 0:03:17.323 ********* 2025-08-29 19:26:04.640492 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:26:04.640501 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:26:04.640511 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:26:04.640521 | orchestrator | 2025-08-29 19:26:04.640530 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-08-29 19:26:04.640540 | orchestrator | 2025-08-29 19:26:04.640550 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-08-29 19:26:04.640567 | orchestrator | Friday 29 August 2025 19:25:39 +0000 (0:00:01.111) 0:03:18.435 ********* 2025-08-29 19:26:04.640577 | orchestrator | ok: [testbed-manager] 2025-08-29 19:26:04.640587 | orchestrator | 2025-08-29 19:26:04.640596 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-08-29 19:26:04.640606 | orchestrator | Friday 29 August 2025 19:25:39 +0000 (0:00:00.124) 0:03:18.559 ********* 2025-08-29 19:26:04.640615 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-08-29 19:26:04.640630 | orchestrator | 2025-08-29 19:26:04.640647 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-08-29 19:26:04.640665 | orchestrator | Friday 29 August 2025 19:25:39 +0000 (0:00:00.198) 0:03:18.757 ********* 2025-08-29 19:26:04.640680 | orchestrator | changed: [testbed-manager] 2025-08-29 19:26:04.640696 | orchestrator | 2025-08-29 19:26:04.640712 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-08-29 19:26:04.640730 | orchestrator | 2025-08-29 19:26:04.640747 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-08-29 19:26:04.640762 | orchestrator | Friday 29 August 2025 19:25:46 +0000 (0:00:06.290) 0:03:25.048 ********* 2025-08-29 19:26:04.640778 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:26:04.640793 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:26:04.640809 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:26:04.640836 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:26:04.640854 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:26:04.640870 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:26:04.640886 | orchestrator | 2025-08-29 19:26:04.640903 | orchestrator | TASK [Manage labels] *********************************************************** 2025-08-29 19:26:04.640920 | orchestrator | Friday 29 August 2025 19:25:47 +0000 (0:00:00.932) 0:03:25.981 ********* 2025-08-29 19:26:04.640936 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 19:26:04.640952 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 19:26:04.640968 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 19:26:04.640984 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2025-08-29 19:26:04.641000 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 19:26:04.641016 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 19:26:04.641032 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 19:26:04.641048 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 19:26:04.641063 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 19:26:04.641078 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 19:26:04.641095 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 19:26:04.641112 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 19:26:04.641129 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 19:26:04.641145 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 19:26:04.641169 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 19:26:04.641188 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 19:26:04.641205 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 19:26:04.641222 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 19:26:04.641238 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 19:26:04.641264 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-08-29 19:26:04.641281 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 19:26:04.641297 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 19:26:04.641313 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 19:26:04.641329 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 19:26:04.641346 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 19:26:04.641361 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 19:26:04.641377 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 19:26:04.641395 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 19:26:04.641411 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 19:26:04.641427 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 19:26:04.641470 | orchestrator | 2025-08-29 19:26:04.641484 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 19:26:04.641494 | orchestrator | Friday 29 August 2025 19:26:01 +0000 (0:00:14.091) 0:03:40.072 ********* 2025-08-29 19:26:04.641504 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:04.641515 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:04.641524 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:04.641534 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.641544 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:26:04.641553 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
19:26:04.641563 | orchestrator | 2025-08-29 19:26:04.641573 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 19:26:04.641583 | orchestrator | Friday 29 August 2025 19:26:01 +0000 (0:00:00.642) 0:03:40.714 ********* 2025-08-29 19:26:04.641592 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:04.641602 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:04.641612 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:04.641621 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:04.641631 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:26:04.641641 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:26:04.641650 | orchestrator | 2025-08-29 19:26:04.641660 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:26:04.641670 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:26:04.641692 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 19:26:04.641703 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 19:26:04.641713 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 19:26:04.641723 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 19:26:04.641732 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 19:26:04.641742 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 19:26:04.641761 | orchestrator | 2025-08-29 19:26:04.641778 | orchestrator | 2025-08-29 19:26:04.641795 | orchestrator | TASKS RECAP 
********************************************************************
2025-08-29 19:26:04.641811 | orchestrator | Friday 29 August 2025 19:26:02 +0000 (0:00:00.582) 0:03:41.296 *********
2025-08-29 19:26:04.641826 | orchestrator | ===============================================================================
2025-08-29 19:26:04.641842 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.11s
2025-08-29 19:26:04.641858 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.53s
2025-08-29 19:26:04.641875 | orchestrator | Manage labels ---------------------------------------------------------- 14.09s
2025-08-29 19:26:04.641891 | orchestrator | kubectl : Install required packages ------------------------------------ 12.51s
2025-08-29 19:26:04.641908 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.44s
2025-08-29 19:26:04.641932 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.35s
2025-08-29 19:26:04.641949 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.29s
2025-08-29 19:26:04.641965 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.01s
2025-08-29 19:26:04.641981 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s
2025-08-29 19:26:04.641997 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.67s
2025-08-29 19:26:04.642014 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.28s
2025-08-29 19:26:04.642302 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.17s
2025-08-29 19:26:04.642321 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.85s
2025-08-29 19:26:04.642336 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.75s
2025-08-29 19:26:04.642352 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.74s
2025-08-29 19:26:04.642368 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.72s
2025-08-29 19:26:04.642384 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.60s
2025-08-29 19:26:04.642400 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.58s
2025-08-29 19:26:04.642418 | orchestrator | Get home directory of operator user ------------------------------------- 1.56s
2025-08-29 19:26:04.642434 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.56s
2025-08-29 19:26:04.642480 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task 7cd33ba3-579e-4262-b485-b1db82855f18 is in state STARTED
2025-08-29 19:26:04.642508 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task 41b65702-3cdf-4eda-aae4-6f655e96f846 is in state STARTED
2025-08-29 19:26:04.642526 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:04.643295 | orchestrator | 2025-08-29 19:26:04 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:04.643324 | orchestrator | 2025-08-29 19:26:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:07.684067 | orchestrator | 2025-08-29 19:26:07 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:07.687369 | orchestrator | 2025-08-29 19:26:07 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:07.687936 | orchestrator | 2025-08-29 19:26:07 | INFO  | Task 7cd33ba3-579e-4262-b485-b1db82855f18 is in state STARTED
2025-08-29 19:26:07.689009 | orchestrator | 2025-08-29 19:26:07 |
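The TASKS RECAP above comes from Ansible's profile_tasks callback and is plain text; if the timings are needed programmatically, a small sketch like the following can pull them out (the regex and function name are my own, not part of OSISM or Ansible):

```python
import re

# Matches profile_tasks recap lines such as
# "k3s_server : Enable and check K3s service ------ 25.53s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_recap_line(line: str):
    """Return (task_name, seconds) or None for non-timing lines."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    return m.group("task"), float(m.group("secs"))

line = "k3s_server : Enable and check K3s service ------------------------------ 25.53s"
print(parse_recap_line(line))
```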
INFO  | Task 41b65702-3cdf-4eda-aae4-6f655e96f846 is in state STARTED
2025-08-29 19:26:07.692148 | orchestrator | 2025-08-29 19:26:07 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:07.698090 | orchestrator | 2025-08-29 19:26:07 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:07.698144 | orchestrator | 2025-08-29 19:26:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:10.753520 | orchestrator | 2025-08-29 19:26:10 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:10.753636 | orchestrator | 2025-08-29 19:26:10 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:10.753652 | orchestrator | 2025-08-29 19:26:10 | INFO  | Task 7cd33ba3-579e-4262-b485-b1db82855f18 is in state STARTED
2025-08-29 19:26:10.753664 | orchestrator | 2025-08-29 19:26:10 | INFO  | Task 41b65702-3cdf-4eda-aae4-6f655e96f846 is in state STARTED
2025-08-29 19:26:10.753675 | orchestrator | 2025-08-29 19:26:10 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:10.754252 | orchestrator | 2025-08-29 19:26:10 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:10.754293 | orchestrator | 2025-08-29 19:26:10 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:13.791900 | orchestrator | 2025-08-29 19:26:13 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:13.791992 | orchestrator | 2025-08-29 19:26:13 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:13.792962 | orchestrator | 2025-08-29 19:26:13 | INFO  | Task 7cd33ba3-579e-4262-b485-b1db82855f18 is in state STARTED
2025-08-29 19:26:13.793228 | orchestrator | 2025-08-29 19:26:13 | INFO  | Task 41b65702-3cdf-4eda-aae4-6f655e96f846 is in state SUCCESS
2025-08-29 19:26:13.793658 | orchestrator | 2025-08-29 19:26:13 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:13.794423 | orchestrator | 2025-08-29 19:26:13 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:13.794473 | orchestrator | 2025-08-29 19:26:13 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:16.869247 | orchestrator | 2025-08-29 19:26:16 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:16.869324 | orchestrator | 2025-08-29 19:26:16 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:16.869330 | orchestrator | 2025-08-29 19:26:16 | INFO  | Task 7cd33ba3-579e-4262-b485-b1db82855f18 is in state SUCCESS
2025-08-29 19:26:16.869334 | orchestrator | 2025-08-29 19:26:16 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:16.869338 | orchestrator | 2025-08-29 19:26:16 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:16.869342 | orchestrator | 2025-08-29 19:26:16 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:19.849978 | orchestrator | 2025-08-29 19:26:19 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state STARTED
2025-08-29 19:26:19.852608 | orchestrator | 2025-08-29 19:26:19 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:26:19.852640 | orchestrator | 2025-08-29 19:26:19 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:26:19.854954 | orchestrator | 2025-08-29 19:26:19 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:26:19.854994 | orchestrator | 2025-08-29 19:26:19 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:26:22.888898 | orchestrator | 2025-08-29 19:26:22 | INFO  | Task bec406b8-4005-4a6f-95c6-b5856b4c80b2 is in state SUCCESS
2025-08-29 19:26:22.892777 | orchestrator |
2025-08-29 19:26:22.892849 | orchestrator |
2025-08-29
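The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a simple poll-until-done loop over task IDs. A minimal sketch of that pattern with a simulated state backend (all names here are hypothetical, not the actual osism client API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=100):
    """Poll each task's state until every one reports SUCCESS.

    get_state(task_id) -> "STARTED" or "SUCCESS"; tasks that reach
    SUCCESS are dropped from the pending set, mirroring how finished
    tasks disappear from the log output above.
    """
    pending = set(task_ids)
    for _ in range(max_checks):
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            return True
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    return False

# Simulated backend: each task succeeds after a given number of polls.
polls = {"a": 2, "b": 1}
def fake_state(task_id):
    polls[task_id] -= 1
    return "SUCCESS" if polls[task_id] <= 0 else "STARTED"

done = wait_for_tasks(fake_state, ["a", "b"], interval=0)
print(done)
```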
19:26:22.892864 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-08-29 19:26:22.892876 | orchestrator |
2025-08-29 19:26:22.892886 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-08-29 19:26:22.892896 | orchestrator | Friday 29 August 2025 19:26:08 +0000 (0:00:00.364) 0:00:00.364 *********
2025-08-29 19:26:22.892907 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-08-29 19:26:22.892917 | orchestrator |
2025-08-29 19:26:22.892928 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-08-29 19:26:22.892937 | orchestrator | Friday 29 August 2025 19:26:09 +0000 (0:00:01.005) 0:00:01.369 *********
2025-08-29 19:26:22.892947 | orchestrator | changed: [testbed-manager]
2025-08-29 19:26:22.892957 | orchestrator |
2025-08-29 19:26:22.892967 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-08-29 19:26:22.892977 | orchestrator | Friday 29 August 2025 19:26:10 +0000 (0:00:01.574) 0:00:02.944 *********
2025-08-29 19:26:22.892986 | orchestrator | changed: [testbed-manager]
2025-08-29 19:26:22.892996 | orchestrator |
2025-08-29 19:26:22.893005 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:26:22.893015 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:26:22.893026 | orchestrator |
2025-08-29 19:26:22.893036 | orchestrator |
2025-08-29 19:26:22.893046 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:26:22.893055 | orchestrator | Friday 29 August 2025 19:26:11 +0000 (0:00:00.533) 0:00:03.477 *********
2025-08-29 19:26:22.893065 | orchestrator | ===============================================================================
2025-08-29 19:26:22.893074 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.57s
2025-08-29 19:26:22.893084 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.01s
2025-08-29 19:26:22.893093 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.54s
2025-08-29 19:26:22.893103 | orchestrator |
2025-08-29 19:26:22.893112 | orchestrator |
2025-08-29 19:26:22.893122 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-08-29 19:26:22.893131 | orchestrator |
2025-08-29 19:26:22.893141 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-08-29 19:26:22.893150 | orchestrator | Friday 29 August 2025 19:26:07 +0000 (0:00:00.171) 0:00:00.171 *********
2025-08-29 19:26:22.893160 | orchestrator | ok: [testbed-manager]
2025-08-29 19:26:22.893170 | orchestrator |
2025-08-29 19:26:22.893180 | orchestrator | TASK [Create .kube directory] **************************************************
2025-08-29 19:26:22.893189 | orchestrator | Friday 29 August 2025 19:26:08 +0000 (0:00:00.633) 0:00:00.805 *********
2025-08-29 19:26:22.893199 | orchestrator | ok: [testbed-manager]
2025-08-29 19:26:22.893209 | orchestrator |
2025-08-29 19:26:22.893218 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-08-29 19:26:22.893228 | orchestrator | Friday 29 August 2025 19:26:08 +0000 (0:00:00.625) 0:00:01.430 *********
2025-08-29 19:26:22.893238 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-08-29 19:26:22.893247 | orchestrator |
2025-08-29 19:26:22.893257 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-08-29 19:26:22.893266 | orchestrator | Friday 29 August 2025 19:26:09 +0000 (0:00:00.820) 0:00:02.251 *********
2025-08-29 19:26:22.893276 | orchestrator | changed: [testbed-manager]
2025-08-29 19:26:22.893285 | orchestrator |
2025-08-29 19:26:22.893297 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-08-29 19:26:22.893313 | orchestrator | Friday 29 August 2025 19:26:11 +0000 (0:00:01.650) 0:00:03.901 *********
2025-08-29 19:26:22.893342 | orchestrator | changed: [testbed-manager]
2025-08-29 19:26:22.893360 | orchestrator |
2025-08-29 19:26:22.893378 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-08-29 19:26:22.893414 | orchestrator | Friday 29 August 2025 19:26:12 +0000 (0:00:00.951) 0:00:04.852 *********
2025-08-29 19:26:22.893458 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 19:26:22.893470 | orchestrator |
2025-08-29 19:26:22.893481 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-08-29 19:26:22.893492 | orchestrator | Friday 29 August 2025 19:26:13 +0000 (0:00:01.447) 0:00:06.300 *********
2025-08-29 19:26:22.893503 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 19:26:22.893514 | orchestrator |
2025-08-29 19:26:22.893526 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-08-29 19:26:22.893544 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:00.694) 0:00:06.995 *********
2025-08-29 19:26:22.893561 | orchestrator | ok: [testbed-manager]
2025-08-29 19:26:22.893577 | orchestrator |
2025-08-29 19:26:22.893594 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-08-29 19:26:22.893610 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:00.634) 0:00:07.629 *********
2025-08-29 19:26:22.893626 | orchestrator | ok: [testbed-manager]
2025-08-29 19:26:22.893644 | orchestrator |
2025-08-29 19:26:22.893662 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:26:22.893681 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:26:22.893693 | orchestrator |
2025-08-29 19:26:22.893704 | orchestrator |
2025-08-29 19:26:22.893716 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:26:22.893727 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:00.373) 0:00:08.003 *********
2025-08-29 19:26:22.893738 | orchestrator | ===============================================================================
2025-08-29 19:26:22.893748 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.65s
2025-08-29 19:26:22.893758 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.45s
2025-08-29 19:26:22.893768 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.95s
2025-08-29 19:26:22.893794 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s
2025-08-29 19:26:22.893805 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.69s
2025-08-29 19:26:22.893814 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.63s
2025-08-29 19:26:22.893824 | orchestrator | Get home directory of operator user ------------------------------------- 0.63s
2025-08-29 19:26:22.893834 | orchestrator | Create .kube directory -------------------------------------------------- 0.63s
2025-08-29 19:26:22.893843 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.37s
2025-08-29 19:26:22.893853 | orchestrator |
2025-08-29 19:26:22.893862 | orchestrator |
2025-08-29 19:26:22.893872 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:26:22.893882 | orchestrator |
2025-08-29 19:26:22.893891 | orchestrator | TASK
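The two "Change server address" tasks above rewrite the server: URL in the fetched kubeconfig, since k3s records its own local address there. A sketch of that substitution (the helper name is mine; 192.168.16.10 is testbed-node-0's address from the log, used here only as an illustrative target):

```python
import re

kubeconfig = """\
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""

def set_server_address(config: str, address: str) -> str:
    """Replace the API server URL, as the 'Change server address' tasks do."""
    return re.sub(r"(?m)^(\s*server:\s*).*$", rf"\g<1>{address}", config)

updated = set_server_address(kubeconfig, "https://192.168.16.10:6443")
print(updated)
```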
[Group hosts based on Kolla action] ***************************************
2025-08-29 19:26:22.893901 | orchestrator | Friday 29 August 2025 19:25:14 +0000 (0:00:00.292) 0:00:00.292 *********
2025-08-29 19:26:22.893910 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:26:22.893920 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:26:22.893930 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:26:22.893939 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:26:22.893949 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:26:22.893959 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:26:22.893968 | orchestrator |
2025-08-29 19:26:22.893978 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:26:22.893988 | orchestrator | Friday 29 August 2025 19:25:15 +0000 (0:00:00.716) 0:00:01.009 *********
2025-08-29 19:26:22.893998 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 19:26:22.894008 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 19:26:22.894089 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 19:26:22.894103 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 19:26:22.894113 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 19:26:22.894122 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 19:26:22.894132 | orchestrator |
2025-08-29 19:26:22.894141 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-08-29 19:26:22.894151 | orchestrator |
2025-08-29 19:26:22.894160 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-08-29 19:26:22.894173 | orchestrator | Friday 29 August 2025 19:25:16 +0000 (0:00:01.095) 0:00:02.105 *********
2025-08-29 19:26:22.894190 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:26:22.894206 | orchestrator |
2025-08-29 19:26:22.894221 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 19:26:22.894236 | orchestrator | Friday 29 August 2025 19:25:18 +0000 (0:00:01.542) 0:00:03.648 *********
2025-08-29 19:26:22.894252 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-08-29 19:26:22.894263 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-08-29 19:26:22.894273 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-08-29 19:26:22.894283 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-08-29 19:26:22.894292 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-08-29 19:26:22.894302 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-08-29 19:26:22.894311 | orchestrator |
2025-08-29 19:26:22.894328 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 19:26:22.894338 | orchestrator | Friday 29 August 2025 19:25:19 +0000 (0:00:01.551) 0:00:05.199 *********
2025-08-29 19:26:22.894348 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-08-29 19:26:22.894357 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-08-29 19:26:22.894367 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-08-29 19:26:22.894376 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-08-29 19:26:22.894386 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-08-29 19:26:22.894396 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-08-29 19:26:22.894405 | orchestrator |
2025-08-29 19:26:22.894415 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 19:26:22.894457 | orchestrator | Friday 29 August 2025 19:25:21 +0000 (0:00:02.208) 0:00:07.407 *********
2025-08-29 19:26:22.894467 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-08-29 19:26:22.894477 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:22.894487 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-08-29 19:26:22.894497 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:22.894506 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-08-29 19:26:22.894516 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:22.894526 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-08-29 19:26:22.894535 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:22.894545 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-08-29 19:26:22.894555 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:22.894564 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-08-29 19:26:22.894574 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:22.894583 | orchestrator |
2025-08-29 19:26:22.894593 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-08-29 19:26:22.894603 | orchestrator | Friday 29 August 2025 19:25:23 +0000 (0:00:00.821) 0:00:09.122 *********
2025-08-29 19:26:22.894620 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:26:22.894630 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:26:22.894640 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:26:22.894668 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:26:22.894684 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:26:22.894700 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:26:22.894716 | orchestrator |
2025-08-29 19:26:22.894731 |
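The module-load role above first loads the kernel module and then persists it via modules-load.d so it is loaded again at boot. A sketch of the persistence step, writing to a temporary directory rather than the real /etc/modules-load.d (the function name is my own, not the role's):

```python
import os
import tempfile

def persist_module(module: str, conf_dir: str) -> str:
    """Write a modules-load.d style drop-in so the module loads at boot."""
    os.makedirs(conf_dir, exist_ok=True)
    path = os.path.join(conf_dir, f"{module}.conf")
    with open(path, "w") as fh:
        fh.write(module + "\n")
    return path

# Exercise against a temporary directory instead of /etc/modules-load.d.
with tempfile.TemporaryDirectory() as tmp:
    path = persist_module("openvswitch", os.path.join(tmp, "modules-load.d"))
    content = open(path).read()
print(content.strip())
```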
orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-08-29 19:26:22.894748 | orchestrator | Friday 29 August 2025 19:25:24 +0000 (0:00:00.821) 0:00:09.944 ********* 2025-08-29 19:26:22.894767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.894790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.894810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.894828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.894840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895508 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895695 | orchestrator | 2025-08-29 19:26:22.895706 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-08-29 19:26:22.895717 | orchestrator | Friday 29 August 2025 19:25:27 +0000 (0:00:02.677) 0:00:12.622 ********* 2025-08-29 19:26:22.895727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-08-29 19:26:22.895875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.895957 | orchestrator | 2025-08-29 19:26:22.895968 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-08-29 19:26:22.895978 | orchestrator | Friday 29 August 2025 19:25:29 +0000 (0:00:02.668) 0:00:15.291 ********* 2025-08-29 19:26:22.895988 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:22.895998 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:22.896008 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:22.896018 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:26:22.896028 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:26:22.896038 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:26:22.896048 | orchestrator | 2025-08-29 19:26:22.896058 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-08-29 19:26:22.896068 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:01.574) 0:00:16.865 ********* 2025-08-29 19:26:22.896078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896088 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': 
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 19:26:22.896236 | orchestrator | 2025-08-29 19:26:22.896246 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 19:26:22.896256 | orchestrator | Friday 29 August 2025 19:25:33 +0000 (0:00:02.099) 0:00:18.964 ********* 2025-08-29 19:26:22.896266 | orchestrator | 2025-08-29 19:26:22.896276 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 19:26:22.896285 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.850) 0:00:19.814 ********* 2025-08-29 19:26:22.896295 | orchestrator | 2025-08-29 19:26:22.896314 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 19:26:22.896324 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.625) 0:00:20.440 ********* 2025-08-29 19:26:22.896333 | orchestrator | 2025-08-29 19:26:22.896343 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 19:26:22.896353 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:00.254) 0:00:20.695 ********* 2025-08-29 19:26:22.896362 | orchestrator | 2025-08-29 19:26:22.896372 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 19:26:22.896381 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:00.274) 0:00:20.969 ********* 2025-08-29 19:26:22.896391 | orchestrator | 2025-08-29 19:26:22.896401 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 19:26:22.896410 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:00.606) 0:00:21.576 ********* 2025-08-29 19:26:22.896452 | orchestrator | 2025-08-29 19:26:22.896462 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-08-29 19:26:22.896472 | orchestrator | Friday 29 
August 2025 19:25:36 +0000 (0:00:00.423) 0:00:21.999 ********* 2025-08-29 19:26:22.896481 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:26:22.896491 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:26:22.896501 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:26:22.896511 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:26:22.896520 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:26:22.896530 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:26:22.896539 | orchestrator | 2025-08-29 19:26:22.896549 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-08-29 19:26:22.896559 | orchestrator | Friday 29 August 2025 19:25:43 +0000 (0:00:06.650) 0:00:28.650 ********* 2025-08-29 19:26:22.896569 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:26:22.896579 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:26:22.896589 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:26:22.896598 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:26:22.896608 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:26:22.896618 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:26:22.896627 | orchestrator | 2025-08-29 19:26:22.896637 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 19:26:22.896647 | orchestrator | Friday 29 August 2025 19:25:44 +0000 (0:00:01.741) 0:00:30.392 ********* 2025-08-29 19:26:22.896656 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:26:22.896666 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:26:22.896675 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:26:22.896685 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:26:22.896694 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:26:22.896704 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:26:22.896713 | orchestrator | 2025-08-29 19:26:22.896723 | orchestrator | TASK [openvswitch : Set system-id, hostname 
and hw-offload] ******************** 2025-08-29 19:26:22.896733 | orchestrator | Friday 29 August 2025 19:25:56 +0000 (0:00:11.773) 0:00:42.165 ********* 2025-08-29 19:26:22.896742 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-08-29 19:26:22.896758 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-08-29 19:26:22.896769 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-08-29 19:26:22.896783 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-08-29 19:26:22.896793 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-08-29 19:26:22.896803 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-08-29 19:26:22.896812 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-08-29 19:26:22.896828 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-08-29 19:26:22.896838 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-08-29 19:26:22.896848 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-08-29 19:26:22.896857 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-08-29 19:26:22.896867 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-08-29 19:26:22.896877 
| orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 19:26:22.896886 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 19:26:22.896895 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 19:26:22.896905 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 19:26:22.896914 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 19:26:22.896928 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 19:26:22.896946 | orchestrator | 2025-08-29 19:26:22.896962 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-08-29 19:26:22.896978 | orchestrator | Friday 29 August 2025 19:26:04 +0000 (0:00:07.955) 0:00:50.121 ********* 2025-08-29 19:26:22.896995 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-08-29 19:26:22.897011 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:22.897026 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-08-29 19:26:22.897041 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:22.897056 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-08-29 19:26:22.897071 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:22.897088 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-08-29 19:26:22.897105 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-08-29 19:26:22.897122 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-08-29 19:26:22.897139 | orchestrator | 2025-08-29 19:26:22.897156 | 
orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-08-29 19:26:22.897218 | orchestrator | Friday 29 August 2025 19:26:09 +0000 (0:00:04.552) 0:00:54.674 ********* 2025-08-29 19:26:22.897228 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-08-29 19:26:22.897237 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:26:22.897245 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-08-29 19:26:22.897253 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:26:22.897261 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-08-29 19:26:22.897269 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:26:22.897276 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-08-29 19:26:22.897284 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-08-29 19:26:22.897292 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-08-29 19:26:22.897300 | orchestrator | 2025-08-29 19:26:22.897308 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 19:26:22.897316 | orchestrator | Friday 29 August 2025 19:26:13 +0000 (0:00:04.680) 0:00:59.355 ********* 2025-08-29 19:26:22.897323 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:26:22.897331 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:26:22.897339 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:26:22.897355 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:26:22.897363 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:26:22.897371 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:26:22.897379 | orchestrator | 2025-08-29 19:26:22.897387 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:26:22.897396 | orchestrator | testbed-node-0 : ok=15  changed=11  
unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 19:26:22.897405 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 19:26:22.897439 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 19:26:22.897453 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:26:22.897461 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:26:22.897469 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:26:22.897477 | orchestrator | 2025-08-29 19:26:22.897485 | orchestrator | 2025-08-29 19:26:22.897493 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:26:22.897501 | orchestrator | Friday 29 August 2025 19:26:22 +0000 (0:00:08.455) 0:01:07.811 ********* 2025-08-29 19:26:22.897509 | orchestrator | =============================================================================== 2025-08-29 19:26:22.897517 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.23s 2025-08-29 19:26:22.897525 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.96s 2025-08-29 19:26:22.897532 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.65s 2025-08-29 19:26:22.897540 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.68s 2025-08-29 19:26:22.897548 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.55s 2025-08-29 19:26:22.897556 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.04s 2025-08-29 19:26:22.897564 | orchestrator | openvswitch : Ensuring config 
directories exist ------------------------- 2.68s 2025-08-29 19:26:22.897572 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.67s 2025-08-29 19:26:22.897580 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.21s 2025-08-29 19:26:22.897588 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.10s 2025-08-29 19:26:22.897596 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.74s 2025-08-29 19:26:22.897603 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.71s 2025-08-29 19:26:22.897611 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.57s 2025-08-29 19:26:22.897619 | orchestrator | module-load : Load modules ---------------------------------------------- 1.55s 2025-08-29 19:26:22.897627 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.54s 2025-08-29 19:26:22.897635 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2025-08-29 19:26:22.897642 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.82s 2025-08-29 19:26:22.897650 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2025-08-29 19:26:22.897658 | orchestrator | 2025-08-29 19:26:22 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:26:22.897666 | orchestrator | 2025-08-29 19:26:22 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:26:22.897679 | orchestrator | 2025-08-29 19:26:22 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:26:22.897687 | orchestrator | 2025-08-29 19:26:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:26:25.918959 | orchestrator | 2025-08-29 19:26:25 | 
INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:26:25.919251 | orchestrator | 2025-08-29 19:26:25 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state STARTED 2025-08-29 19:26:25.922066 | orchestrator | 2025-08-29 19:26:25 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:26:25.923410 | orchestrator | 2025-08-29 19:26:25 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:26:25.923511 | orchestrator | 2025-08-29 19:26:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:26:28.955641 | orchestrator | 2025-08-29 19:26:28 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:26:28.955747 | orchestrator | 2025-08-29 19:26:28 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state STARTED 2025-08-29 19:26:28.956356 | orchestrator | 2025-08-29 19:26:28 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:26:28.957174 | orchestrator | 2025-08-29 19:26:28 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:26:28.957204 | orchestrator | 2025-08-29 19:26:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:26:31.999821 | orchestrator | 2025-08-29 19:26:31 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED 2025-08-29 19:26:32.000473 | orchestrator | 2025-08-29 19:26:31 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state STARTED 2025-08-29 19:26:32.001031 | orchestrator | 2025-08-29 19:26:31 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:26:32.002119 | orchestrator | 2025-08-29 19:26:31 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:26:32.002163 | orchestrator | 2025-08-29 19:26:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:26:35.031509 | orchestrator | 2025-08-29 19:26:35 | INFO  | Task 
a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:27:57.479974 | orchestrator | 2025-08-29 19:27:57 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state STARTED
2025-08-29 19:27:57.480246 | orchestrator | 2025-08-29 19:27:57 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:27:57.481136 | orchestrator | 2025-08-29 19:27:57 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:27:57.481186 | orchestrator | 2025-08-29 19:27:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:28:00.507617 | orchestrator | 2025-08-29 19:28:00 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state STARTED
2025-08-29 19:28:00.508960 | orchestrator | 2025-08-29 19:28:00 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state STARTED
2025-08-29 19:28:00.509854 | orchestrator | 2025-08-29 19:28:00 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:28:00.511236 | orchestrator | 2025-08-29 19:28:00 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED
2025-08-29 19:28:00.511302 | orchestrator | 2025-08-29 19:28:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:28:03.548682 | orchestrator | 2025-08-29 19:28:03 | INFO  | Task a0e1e446-54f8-482f-bef2-a5aef842785b is in state SUCCESS
2025-08-29 19:28:03.549895 | orchestrator |
2025-08-29 19:28:03.549934 | orchestrator |
2025-08-29 19:28:03.549948 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-08-29 19:28:03.549962 | orchestrator |
2025-08-29 19:28:03.549974 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-08-29 19:28:03.549987 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:00.155) 0:00:00.155 *********
2025-08-29 19:28:03.550000 | orchestrator | ok: [localhost] => {
2025-08-29 19:28:03.550062 | orchestrator |  "msg": "The task
'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-08-29 19:28:03.550076 | orchestrator | }
2025-08-29 19:28:03.550089 | orchestrator |
2025-08-29 19:28:03.550102 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-08-29 19:28:03.550115 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:00.052) 0:00:00.207 *********
2025-08-29 19:28:03.550129 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-08-29 19:28:03.550170 | orchestrator | ...ignoring
2025-08-29 19:28:03.550183 | orchestrator |
2025-08-29 19:28:03.550195 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-08-29 19:28:03.550208 | orchestrator | Friday 29 August 2025 19:25:40 +0000 (0:00:03.500) 0:00:03.708 *********
2025-08-29 19:28:03.550221 | orchestrator | skipping: [localhost]
2025-08-29 19:28:03.550259 | orchestrator |
2025-08-29 19:28:03.550274 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-08-29 19:28:03.550287 | orchestrator | Friday 29 August 2025 19:25:40 +0000 (0:00:00.049) 0:00:03.758 *********
2025-08-29 19:28:03.550299 | orchestrator | ok: [localhost]
2025-08-29 19:28:03.550311 | orchestrator |
2025-08-29 19:28:03.550325 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:28:03.550338 | orchestrator |
2025-08-29 19:28:03.550351 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:28:03.550364 | orchestrator | Friday 29 August 2025 19:25:40 +0000 (0:00:00.197) 0:00:03.955 *********
2025-08-29 19:28:03.550378 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:28:03.550391 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:28:03.550405 |
orchestrator | ok: [testbed-node-2]
2025-08-29 19:28:03.550418 | orchestrator |
2025-08-29 19:28:03.550430 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:28:03.550442 | orchestrator | Friday 29 August 2025 19:25:40 +0000 (0:00:00.300) 0:00:04.256 *********
2025-08-29 19:28:03.550455 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-08-29 19:28:03.550469 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-08-29 19:28:03.550483 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-08-29 19:28:03.550495 | orchestrator |
2025-08-29 19:28:03.550508 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-08-29 19:28:03.550522 | orchestrator |
2025-08-29 19:28:03.550536 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 19:28:03.550549 | orchestrator | Friday 29 August 2025 19:25:41 +0000 (0:00:00.554) 0:00:04.811 *********
2025-08-29 19:28:03.550645 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:28:03.550669 | orchestrator |
2025-08-29 19:28:03.550682 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 19:28:03.550695 | orchestrator | Friday 29 August 2025 19:25:42 +0000 (0:00:01.062) 0:00:05.465 *********
2025-08-29 19:28:03.550708 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:28:03.550721 | orchestrator |
2025-08-29 19:28:03.550736 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-08-29 19:28:03.550750 | orchestrator | Friday 29 August 2025 19:25:43 +0000 (0:00:00.851) 0:00:07.378 *********
2025-08-29 19:28:03.550763 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:28:03.550778 | orchestrator |
2025-08-29 19:28:03.550792 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-08-29 19:28:03.550807 | orchestrator | Friday 29 August 2025 19:25:43 +0000 (0:00:00.546) 0:00:07.378 *********
2025-08-29 19:28:03.550820 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:28:03.550835 | orchestrator |
2025-08-29 19:28:03.550850 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-08-29 19:28:03.550863 | orchestrator | Friday 29 August 2025 19:25:44 +0000 (0:00:00.561) 0:00:07.924 *********
2025-08-29 19:28:03.550876 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:28:03.550888 | orchestrator |
2025-08-29 19:28:03.551022 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-08-29 19:28:03.551037 | orchestrator | Friday 29 August 2025 19:25:45 +0000 (0:00:00.561) 0:00:08.486 *********
2025-08-29 19:28:03.551049 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:28:03.551062 | orchestrator |
2025-08-29 19:28:03.551073 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 19:28:03.551102 | orchestrator | Friday 29 August 2025 19:25:46 +0000 (0:00:01.162) 0:00:09.648 *********
2025-08-29 19:28:03.551122 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:28:03.551135 | orchestrator |
2025-08-29 19:28:03.551147 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-08-29 19:28:03.551160 | orchestrator | Friday 29 August 2025 19:25:49 +0000 (0:00:03.714) 0:00:13.363 *********
2025-08-29 19:28:03.551173 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:28:03.551186 | orchestrator |
2025-08-29 19:28:03.551201 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-08-29 19:28:03.551213 |
orchestrator | Friday 29 August 2025 19:25:50 +0000 (0:00:01.000) 0:00:14.363 *********
2025-08-29 19:28:03.551226 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:28:03.551264 | orchestrator |
2025-08-29 19:28:03.551277 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-08-29 19:28:03.551290 | orchestrator | Friday 29 August 2025 19:25:51 +0000 (0:00:00.331) 0:00:14.695 *********
2025-08-29 19:28:03.551303 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:28:03.551316 | orchestrator |
2025-08-29 19:28:03.551344 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-08-29 19:28:03.551357 | orchestrator | Friday 29 August 2025 19:25:51 +0000 (0:00:00.270) 0:00:14.966 *********
2025-08-29 19:28:03.551376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 19:28:03.551394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 19:28:03.551410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 19:28:03.551435 | orchestrator |
2025-08-29 19:28:03.551448 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-08-29 19:28:03.551467 | orchestrator | Friday 29 August 2025 19:25:53 +0000 (0:00:01.772) 0:00:16.738 *********
2025-08-29 19:28:03.551491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 19:28:03.551506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 19:28:03.551520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 19:28:03.551541 | orchestrator |
2025-08-29 19:28:03.551555 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-08-29 19:28:03.551568 | orchestrator | Friday 29 August 2025 19:25:55 +0000 (0:00:02.401) 0:00:19.140 *********
2025-08-29 19:28:03.551580 | orchestrator | changed:
[testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 19:28:03.551594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 19:28:03.551607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-08-29 19:28:03.551620 | orchestrator |
2025-08-29 19:28:03.551633 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-08-29 19:28:03.551646 | orchestrator | Friday 29 August 2025 19:25:58 +0000 (0:00:02.340) 0:00:21.481 *********
2025-08-29 19:28:03.551659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 19:28:03.551673 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 19:28:03.551686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-08-29 19:28:03.551698 | orchestrator |
2025-08-29 19:28:03.551716 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-08-29 19:28:03.551729 | orchestrator | Friday 29 August 2025 19:26:01 +0000 (0:00:02.181) 0:00:24.550 *********
2025-08-29 19:28:03.551742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 19:28:03.551755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 19:28:03.551767 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-08-29 19:28:03.551781 | orchestrator |
2025-08-29 19:28:03.551794 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-08-29 19:28:03.551807 | orchestrator | Friday 29 August 2025 19:26:03 +0000 (0:00:02.181) 0:00:26.731 *********
2025-08-29 19:28:03.551827 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 19:28:03.551841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 19:28:03.551956 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-08-29 19:28:03.551969 | orchestrator |
2025-08-29 19:28:03.551982 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-08-29 19:28:03.551996 | orchestrator | Friday 29 August 2025 19:26:07 +0000 (0:00:04.298) 0:00:31.029 *********
2025-08-29 19:28:03.552009 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 19:28:03.552021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 19:28:03.552034 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-08-29 19:28:03.552047 | orchestrator |
2025-08-29 19:28:03.552059 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-08-29 19:28:03.552072 | orchestrator | Friday 29 August 2025 19:26:09 +0000 (0:00:01.842) 0:00:32.872 *********
2025-08-29 19:28:03.552086 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 19:28:03.552172 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 19:28:03.552187 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-08-29 19:28:03.552200 | orchestrator |
2025-08-29 19:28:03.552212 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-08-29 19:28:03.552225 | orchestrator | Friday 29 August 2025
19:26:11 +0000 (0:00:02.341) 0:00:35.214 ********* 2025-08-29 19:28:03.552284 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:28:03.552298 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:28:03.552311 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:28:03.552324 | orchestrator | 2025-08-29 19:28:03.552336 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-08-29 19:28:03.552349 | orchestrator | Friday 29 August 2025 19:26:12 +0000 (0:00:00.879) 0:00:36.093 ********* 2025-08-29 19:28:03.552364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 19:28:03.552384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 19:28:03.552408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 19:28:03.552422 | orchestrator | 2025-08-29 19:28:03.552435 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 
2025-08-29 19:28:03.552448 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:01.741) 0:00:37.835 ********* 2025-08-29 19:28:03.552460 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:28:03.552473 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:28:03.552493 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:28:03.552506 | orchestrator | 2025-08-29 19:28:03.552518 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-08-29 19:28:03.552532 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:01.350) 0:00:39.185 ********* 2025-08-29 19:28:03.552545 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:28:03.552558 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:28:03.552570 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:28:03.552583 | orchestrator | 2025-08-29 19:28:03.552595 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-08-29 19:28:03.552608 | orchestrator | Friday 29 August 2025 19:26:22 +0000 (0:00:06.341) 0:00:45.527 ********* 2025-08-29 19:28:03.552620 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:28:03.552632 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:28:03.552645 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:28:03.552657 | orchestrator | 2025-08-29 19:28:03.552670 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 19:28:03.552683 | orchestrator | 2025-08-29 19:28:03.552695 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 19:28:03.552708 | orchestrator | Friday 29 August 2025 19:26:22 +0000 (0:00:00.709) 0:00:46.236 ********* 2025-08-29 19:28:03.552721 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:28:03.552733 | orchestrator | 2025-08-29 19:28:03.552746 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] 
********************** 2025-08-29 19:28:03.552759 | orchestrator | Friday 29 August 2025 19:26:23 +0000 (0:00:00.779) 0:00:47.016 ********* 2025-08-29 19:28:03.552772 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:28:03.552783 | orchestrator | 2025-08-29 19:28:03.552797 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 19:28:03.552810 | orchestrator | Friday 29 August 2025 19:26:23 +0000 (0:00:00.248) 0:00:47.264 ********* 2025-08-29 19:28:03.552823 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:28:03.552835 | orchestrator | 2025-08-29 19:28:03.552848 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 19:28:03.552862 | orchestrator | Friday 29 August 2025 19:26:25 +0000 (0:00:02.058) 0:00:49.323 ********* 2025-08-29 19:28:03.552875 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:28:03.552888 | orchestrator | 2025-08-29 19:28:03.552900 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 19:28:03.552914 | orchestrator | 2025-08-29 19:28:03.552927 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 19:28:03.552941 | orchestrator | Friday 29 August 2025 19:27:20 +0000 (0:00:54.401) 0:01:43.724 ********* 2025-08-29 19:28:03.552956 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:28:03.552970 | orchestrator | 2025-08-29 19:28:03.552984 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 19:28:03.552996 | orchestrator | Friday 29 August 2025 19:27:20 +0000 (0:00:00.593) 0:01:44.318 ********* 2025-08-29 19:28:03.553009 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:28:03.553022 | orchestrator | 2025-08-29 19:28:03.553035 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 19:28:03.553047 | 
orchestrator | Friday 29 August 2025 19:27:21 +0000 (0:00:00.213) 0:01:44.531 ********* 2025-08-29 19:28:03.553061 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:28:03.553076 | orchestrator | 2025-08-29 19:28:03.553090 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 19:28:03.553104 | orchestrator | Friday 29 August 2025 19:27:23 +0000 (0:00:01.951) 0:01:46.483 ********* 2025-08-29 19:28:03.553118 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:28:03.553130 | orchestrator | 2025-08-29 19:28:03.553143 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 19:28:03.553156 | orchestrator | 2025-08-29 19:28:03.553171 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 19:28:03.553190 | orchestrator | Friday 29 August 2025 19:27:40 +0000 (0:00:17.769) 0:02:04.252 ********* 2025-08-29 19:28:03.553210 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:28:03.553225 | orchestrator | 2025-08-29 19:28:03.553310 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 19:28:03.553327 | orchestrator | Friday 29 August 2025 19:27:41 +0000 (0:00:00.585) 0:02:04.838 ********* 2025-08-29 19:28:03.553341 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:28:03.553355 | orchestrator | 2025-08-29 19:28:03.553368 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 19:28:03.553381 | orchestrator | Friday 29 August 2025 19:27:41 +0000 (0:00:00.263) 0:02:05.101 ********* 2025-08-29 19:28:03.553394 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:28:03.553407 | orchestrator | 2025-08-29 19:28:03.553419 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 19:28:03.553442 | orchestrator | Friday 29 August 2025 19:27:43 +0000 
(0:00:01.567) 0:02:06.669 ********* 2025-08-29 19:28:03.553455 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:28:03.553468 | orchestrator | 2025-08-29 19:28:03.553480 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-08-29 19:28:03.553493 | orchestrator | 2025-08-29 19:28:03.553506 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-08-29 19:28:03.553518 | orchestrator | Friday 29 August 2025 19:27:58 +0000 (0:00:14.916) 0:02:21.585 ********* 2025-08-29 19:28:03.553531 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:28:03.553544 | orchestrator | 2025-08-29 19:28:03.553556 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-08-29 19:28:03.553570 | orchestrator | Friday 29 August 2025 19:27:58 +0000 (0:00:00.519) 0:02:22.104 ********* 2025-08-29 19:28:03.553582 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 19:28:03.553595 | orchestrator | enable_outward_rabbitmq_True 2025-08-29 19:28:03.553608 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 19:28:03.553621 | orchestrator | outward_rabbitmq_restart 2025-08-29 19:28:03.553633 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:28:03.553646 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:28:03.553659 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:28:03.553671 | orchestrator | 2025-08-29 19:28:03.553684 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-08-29 19:28:03.553697 | orchestrator | skipping: no hosts matched 2025-08-29 19:28:03.553709 | orchestrator | 2025-08-29 19:28:03.553722 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-08-29 19:28:03.553735 | orchestrator | skipping: no hosts matched 2025-08-29 
19:28:03.553748 | orchestrator | 2025-08-29 19:28:03.553761 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-08-29 19:28:03.553774 | orchestrator | skipping: no hosts matched 2025-08-29 19:28:03.553787 | orchestrator | 2025-08-29 19:28:03.553800 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:28:03.553813 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 19:28:03.553827 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 19:28:03.553841 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:28:03.553853 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 19:28:03.553865 | orchestrator | 2025-08-29 19:28:03.553878 | orchestrator | 2025-08-29 19:28:03.553889 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:28:03.553899 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:02.684) 0:02:24.789 ********* 2025-08-29 19:28:03.553918 | orchestrator | =============================================================================== 2025-08-29 19:28:03.553929 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.09s 2025-08-29 19:28:03.553940 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.34s 2025-08-29 19:28:03.553950 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.58s 2025-08-29 19:28:03.553962 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.30s 2025-08-29 19:28:03.553972 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 
3.71s 2025-08-29 19:28:03.553983 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.50s 2025-08-29 19:28:03.553994 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.07s 2025-08-29 19:28:03.554004 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.68s 2025-08-29 19:28:03.554062 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.40s 2025-08-29 19:28:03.554076 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.34s 2025-08-29 19:28:03.554088 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.34s 2025-08-29 19:28:03.554099 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.18s 2025-08-29 19:28:03.554111 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.96s 2025-08-29 19:28:03.554122 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.84s 2025-08-29 19:28:03.554133 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.77s 2025-08-29 19:28:03.554149 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.74s 2025-08-29 19:28:03.554160 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.35s 2025-08-29 19:28:03.554171 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.16s 2025-08-29 19:28:03.554183 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.06s 2025-08-29 19:28:03.554195 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2025-08-29 19:28:03.554207 | orchestrator | 2025-08-29 19:28:03 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state 
STARTED 2025-08-29 19:28:03.554218 | orchestrator | 2025-08-29 19:28:03 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:28:03.554258 | orchestrator | 2025-08-29 19:28:03 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:28:03.554270 | orchestrator | 2025-08-29 19:28:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:29:01.443944 | orchestrator | 2025-08-29 19:29:01 | INFO  | Task 296631bb-1f0a-4e0f-a1e2-274f5d633875 is in state SUCCESS 2025-08-29 19:29:01.446943 | orchestrator | 2025-08-29 19:29:01.446994 | orchestrator | 2025-08-29
19:29:01.447008 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:29:01.447020 | orchestrator | 2025-08-29 19:29:01.447031 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:29:01.447044 | orchestrator | Friday 29 August 2025 19:26:27 +0000 (0:00:00.174) 0:00:00.174 ********* 2025-08-29 19:29:01.447055 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:29:01.447067 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:29:01.447079 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:29:01.447090 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.447101 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.447113 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.447124 | orchestrator | 2025-08-29 19:29:01.447136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:29:01.447147 | orchestrator | Friday 29 August 2025 19:26:28 +0000 (0:00:00.929) 0:00:01.103 ********* 2025-08-29 19:29:01.447159 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-08-29 19:29:01.447197 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-08-29 19:29:01.447210 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-08-29 19:29:01.447221 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-08-29 19:29:01.447231 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-08-29 19:29:01.447242 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-08-29 19:29:01.447253 | orchestrator | 2025-08-29 19:29:01.447264 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-08-29 19:29:01.447274 | orchestrator | 2025-08-29 19:29:01.447285 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-08-29 19:29:01.447296 | 
orchestrator | Friday 29 August 2025 19:26:29 +0000 (0:00:01.371) 0:00:02.475 *********
2025-08-29 19:29:01.447308 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:29:01.447319 | orchestrator |
2025-08-29 19:29:01.447330 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-08-29 19:29:01.447341 | orchestrator | Friday 29 August 2025 19:26:31 +0000 (0:00:01.672) 0:00:04.147 *********
2025-08-29 19:29:01.447354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447450 | orchestrator |
2025-08-29 19:29:01.447475 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-08-29 19:29:01.447487 | orchestrator | Friday 29 August 2025 19:26:32 +0000 (0:00:01.119) 0:00:05.266 *********
2025-08-29 19:29:01.447498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447577 | orchestrator |
2025-08-29 19:29:01.447590 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-08-29 19:29:01.447602 | orchestrator | Friday 29 August 2025 19:26:34 +0000 (0:00:01.701) 0:00:06.968 *********
2025-08-29 19:29:01.447619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447696 | orchestrator |
2025-08-29 19:29:01.447707 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-08-29 19:29:01.447724 | orchestrator | Friday 29 August 2025 19:26:35 +0000 (0:00:01.038) 0:00:08.006 *********
2025-08-29 19:29:01.447735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447807 | orchestrator |
2025-08-29 19:29:01.447823 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-08-29 19:29:01.447835 | orchestrator | Friday 29 August 2025 19:26:36 +0000 (0:00:01.634) 0:00:09.641 *********
2025-08-29 19:29:01.447845 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:29:01.447939 | orchestrator |
2025-08-29 19:29:01.447970 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-08-29 19:29:01.447989 | orchestrator | Friday 29 August 2025 19:26:38 +0000 (0:00:01.250) 0:00:10.892 *********
2025-08-29 19:29:01.448007 | orchestrator | changed:
[testbed-node-3]
2025-08-29 19:29:01.448025 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:29:01.448042 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:29:01.448060 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:29:01.448079 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:29:01.448096 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:29:01.448115 | orchestrator |
2025-08-29 19:29:01.448142 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-08-29 19:29:01.448161 | orchestrator | Friday 29 August 2025 19:26:40 +0000 (0:00:02.709) 0:00:13.602 *********
2025-08-29 19:29:01.448203 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-08-29 19:29:01.448214 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-08-29 19:29:01.448225 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-08-29 19:29:01.448236 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-08-29 19:29:01.448247 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-08-29 19:29:01.448258 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-08-29 19:29:01.448269 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 19:29:01.448279 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 19:29:01.448299 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 19:29:01.448311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 19:29:01.448321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 19:29:01.448332 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 19:29:01.448353 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 19:29:01.448366 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 19:29:01.448377 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 19:29:01.448387 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 19:29:01.448398 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 19:29:01.448409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 19:29:01.448420 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 19:29:01.448431 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 19:29:01.448451 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 19:29:01.448469 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 19:29:01.448487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 19:29:01.448505 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 19:29:01.448522 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 19:29:01.448539 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 19:29:01.448559 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 19:29:01.448578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 19:29:01.448597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 19:29:01.448617 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 19:29:01.448629 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 19:29:01.448640 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 19:29:01.448650 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 19:29:01.448661 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 19:29:01.448672 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 19:29:01.448682 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 19:29:01.448693 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 19:29:01.448710 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 19:29:01.448721 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 19:29:01.448732 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 19:29:01.448742 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 19:29:01.448760 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-08-29 19:29:01.448771 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 19:29:01.448782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-08-29 19:29:01.448800 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-08-29 19:29:01.448811 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-08-29 19:29:01.448822 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-08-29 19:29:01.448833 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 19:29:01.448961 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-08-29 19:29:01.448975 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 19:29:01.448987 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 19:29:01.448998 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 19:29:01.449009 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 19:29:01.449020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 19:29:01.449030 | orchestrator |
2025-08-29 19:29:01.449042 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 19:29:01.449123 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:19.341) 0:00:32.943 *********
2025-08-29 19:29:01.449136 | orchestrator |
2025-08-29 19:29:01.449147 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 19:29:01.449158 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:00.186) 0:00:33.129 *********
2025-08-29 19:29:01.449203 | orchestrator |
2025-08-29 19:29:01.449217 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 19:29:01.449228 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:00.064) 0:00:33.194 *********
2025-08-29 19:29:01.449239 | orchestrator |
2025-08-29 19:29:01.449250 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 19:29:01.449261 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:00.064) 0:00:33.258 *********
2025-08-29 19:29:01.449272 | orchestrator |
2025-08-29 19:29:01.449283 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 19:29:01.449293 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:00.061) 0:00:33.320 *********
2025-08-29 19:29:01.449304 | orchestrator |
2025-08-29 19:29:01.449315 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 19:29:01.449326 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:00.060) 0:00:33.381 *********
2025-08-29 19:29:01.449336 | orchestrator |
2025-08-29 19:29:01.449347 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-08-29 19:29:01.449358 | orchestrator | Friday 29 August 2025 19:27:00 +0000 (0:00:00.064) 0:00:33.445 *********
2025-08-29 19:29:01.449369 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.449380 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:29:01.449391 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:29:01.449411 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:29:01.449422 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.449433 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.449443 | orchestrator |
2025-08-29 19:29:01.449454 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-08-29 19:29:01.449465 | orchestrator | Friday 29 August 2025 19:27:02 +0000 (0:00:01.864) 0:00:35.309 *********
2025-08-29 19:29:01.449476 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:29:01.449487 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:29:01.449498 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:29:01.449508 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:29:01.449519 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:29:01.449530 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:29:01.449540 | orchestrator |
2025-08-29 19:29:01.449551 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-08-29 19:29:01.449562 | orchestrator |
2025-08-29 19:29:01.449573 | orchestrator | TASK [ovn-db :
include_tasks] **************************************************
2025-08-29 19:29:01.449584 | orchestrator | Friday 29 August 2025 19:27:42 +0000 (0:00:40.108) 0:01:15.418 *********
2025-08-29 19:29:01.449595 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:29:01.449606 | orchestrator |
2025-08-29 19:29:01.449617 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 19:29:01.449631 | orchestrator | Friday 29 August 2025 19:27:43 +0000 (0:00:00.758) 0:01:16.176 *********
2025-08-29 19:29:01.449655 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:29:01.449682 | orchestrator |
2025-08-29 19:29:01.449701 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-08-29 19:29:01.449719 | orchestrator | Friday 29 August 2025 19:27:43 +0000 (0:00:00.459) 0:01:16.636 *********
2025-08-29 19:29:01.449737 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.449758 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.449779 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.449799 | orchestrator |
2025-08-29 19:29:01.449818 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-08-29 19:29:01.449833 | orchestrator | Friday 29 August 2025 19:27:44 +0000 (0:00:00.819) 0:01:17.455 *********
2025-08-29 19:29:01.449845 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.449858 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.449870 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.449891 | orchestrator |
2025-08-29 19:29:01.449905 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-08-29 19:29:01.449917 | orchestrator | Friday 29 August 2025 19:27:44 +0000 (0:00:00.292) 0:01:17.747 *********
2025-08-29 19:29:01.449930 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.449942 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.449955 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.449966 | orchestrator |
2025-08-29 19:29:01.449979 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-08-29 19:29:01.449992 | orchestrator | Friday 29 August 2025 19:27:45 +0000 (0:00:00.302) 0:01:18.049 *********
2025-08-29 19:29:01.450005 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.450063 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.450078 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.450091 | orchestrator |
2025-08-29 19:29:01.450103 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-08-29 19:29:01.450117 | orchestrator | Friday 29 August 2025 19:27:45 +0000 (0:00:00.322) 0:01:18.372 *********
2025-08-29 19:29:01.450129 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.450141 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.450152 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.450162 | orchestrator |
2025-08-29 19:29:01.450190 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-08-29 19:29:01.450211 | orchestrator | Friday 29 August 2025 19:27:46 +0000 (0:00:00.445) 0:01:18.818 *********
2025-08-29 19:29:01.450223 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450234 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450245 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450255 | orchestrator |
2025-08-29 19:29:01.450266 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-08-29 19:29:01.450277 | orchestrator | Friday 29 August 2025 19:27:46 +0000 (0:00:00.288) 0:01:19.106 *********
2025-08-29 19:29:01.450288 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450298 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450309 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450320 | orchestrator |
2025-08-29 19:29:01.450331 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-08-29 19:29:01.450341 | orchestrator | Friday 29 August 2025 19:27:46 +0000 (0:00:00.319) 0:01:19.426 *********
2025-08-29 19:29:01.450352 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450363 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450374 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450385 | orchestrator |
2025-08-29 19:29:01.450395 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-08-29 19:29:01.450406 | orchestrator | Friday 29 August 2025 19:27:46 +0000 (0:00:00.330) 0:01:19.757 *********
2025-08-29 19:29:01.450417 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450427 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450438 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450449 | orchestrator |
2025-08-29 19:29:01.450460 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-08-29 19:29:01.450471 | orchestrator | Friday 29 August 2025 19:27:47 +0000 (0:00:00.423) 0:01:20.180 *********
2025-08-29 19:29:01.450481 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450492 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450503 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450513 | orchestrator |
2025-08-29 19:29:01.450524 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-08-29 19:29:01.450535 | orchestrator | Friday 29 August 2025 19:27:47 +0000 (0:00:00.289) 0:01:20.470 *********
2025-08-29 19:29:01.450546 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450556 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450567 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450578 | orchestrator |
2025-08-29 19:29:01.450589 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-08-29 19:29:01.450600 | orchestrator | Friday 29 August 2025 19:27:47 +0000 (0:00:00.272) 0:01:20.743 *********
2025-08-29 19:29:01.450610 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450621 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450631 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450642 | orchestrator |
2025-08-29 19:29:01.450679 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-08-29 19:29:01.450691 | orchestrator | Friday 29 August 2025 19:27:48 +0000 (0:00:00.252) 0:01:20.995 *********
2025-08-29 19:29:01.450702 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450712 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450723 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450734 | orchestrator |
2025-08-29 19:29:01.450745 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-08-29 19:29:01.450756 | orchestrator | Friday 29 August 2025 19:27:48 +0000 (0:00:00.287) 0:01:21.283 *********
2025-08-29 19:29:01.450776 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450787 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450798 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450809 | orchestrator |
2025-08-29 19:29:01.450820 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-08-29 19:29:01.450831 | orchestrator | Friday 29 August 2025 19:27:48 +0000 (0:00:00.405) 0:01:21.688 *********
2025-08-29 19:29:01.450849 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450860 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450871 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450882 | orchestrator |
2025-08-29 19:29:01.450892 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-08-29 19:29:01.450904 | orchestrator | Friday 29 August 2025 19:27:49 +0000 (0:00:00.266) 0:01:21.955 *********
2025-08-29 19:29:01.450914 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450925 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.450936 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.450946 | orchestrator |
2025-08-29 19:29:01.450958 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-08-29 19:29:01.450968 | orchestrator | Friday 29 August 2025 19:27:49 +0000 (0:00:00.301) 0:01:22.256 *********
2025-08-29 19:29:01.450980 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.450990 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451009 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451021 | orchestrator |
2025-08-29 19:29:01.451032 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 19:29:01.451042 | orchestrator | Friday 29 August 2025 19:27:49 +0000 (0:00:00.272) 0:01:22.529 *********
2025-08-29 19:29:01.451053 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:29:01.451064 | orchestrator |
2025-08-29 19:29:01.451075 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-08-29 19:29:01.451086 | orchestrator | Friday 29 August 2025 19:27:50 +0000 (0:00:00.635) 0:01:23.164 *********
2025-08-29 19:29:01.451097 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.451108 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.451119 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.451129 | orchestrator |
2025-08-29 19:29:01.451141 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-08-29 19:29:01.451152 | orchestrator | Friday 29 August 2025 19:27:50 +0000 (0:00:00.392) 0:01:23.562 *********
2025-08-29 19:29:01.451163 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:29:01.451187 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:29:01.451199 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:29:01.451210 | orchestrator |
2025-08-29 19:29:01.451221 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-08-29 19:29:01.451232 | orchestrator | Friday 29 August 2025 19:27:51 +0000 (0:00:00.392) 0:01:23.954 *********
2025-08-29 19:29:01.451242 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.451253 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451264 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451275 | orchestrator |
2025-08-29 19:29:01.451286 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-08-29 19:29:01.451296 | orchestrator | Friday 29 August 2025 19:27:51 +0000 (0:00:00.417) 0:01:24.371 *********
2025-08-29 19:29:01.451307 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.451318 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451329 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451340 | orchestrator |
2025-08-29 19:29:01.451350 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-08-29 19:29:01.451361 | orchestrator | Friday 29 August 2025 19:27:51 +0000 (0:00:00.322) 0:01:24.693 *********
2025-08-29 19:29:01.451372 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.451383 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451393 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451404 | orchestrator |
2025-08-29 19:29:01.451415 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-08-29 19:29:01.451426 | orchestrator | Friday 29 August 2025 19:27:52 +0000 (0:00:00.285) 0:01:24.979 *********
2025-08-29 19:29:01.451437 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.451454 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451465 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451476 | orchestrator |
2025-08-29 19:29:01.451487 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-08-29 19:29:01.451498 | orchestrator | Friday 29 August 2025 19:27:52 +0000 (0:00:00.299) 0:01:25.278 *********
2025-08-29 19:29:01.451508 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.451519 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451530 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451541 | orchestrator |
2025-08-29 19:29:01.451552 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-08-29 19:29:01.451563 | orchestrator | Friday 29 August 2025 19:27:52 +0000 (0:00:00.402) 0:01:25.681 *********
2025-08-29 19:29:01.451573 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:29:01.451584 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:29:01.451595 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:29:01.451606 | orchestrator |
2025-08-29 19:29:01.451616 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-08-29 19:29:01.451627 | orchestrator | Friday 29 August 2025 19:27:53 +0000 (0:00:00.287) 0:01:25.969 *********
2025-08-29 19:29:01.451639 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
19:29:01.451803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451860 | orchestrator | 2025-08-29 19:29:01.451871 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 19:29:01.451882 | orchestrator | Friday 29 August 2025 19:27:54 +0000 
(0:00:01.391) 0:01:27.360 ********* 2025-08-29 19:29:01.451893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.451991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452018 | orchestrator | 2025-08-29 19:29:01.452038 | orchestrator | TASK [ovn-db : Check 
ovn containers] ******************************************* 2025-08-29 19:29:01.452066 | orchestrator | Friday 29 August 2025 19:27:59 +0000 (0:00:04.540) 0:01:31.900 ********* 2025-08-29 19:29:01.452090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.452368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-08-29 19:29:01.452380 | orchestrator | 2025-08-29 19:29:01.452391 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 19:29:01.452402 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:02.062) 0:01:33.963 ********* 2025-08-29 19:29:01.452413 | orchestrator | 2025-08-29 19:29:01.452424 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 19:29:01.452435 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:00.212) 0:01:34.176 ********* 2025-08-29 19:29:01.452446 | orchestrator | 2025-08-29 19:29:01.452457 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 19:29:01.452468 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:00.061) 0:01:34.237 ********* 2025-08-29 19:29:01.452479 | orchestrator | 2025-08-29 19:29:01.452490 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 19:29:01.452500 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:00.066) 0:01:34.303 ********* 2025-08-29 19:29:01.452511 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.452522 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:29:01.452533 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:29:01.452544 | orchestrator | 2025-08-29 19:29:01.452555 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 19:29:01.452565 | orchestrator | Friday 29 August 2025 19:28:09 +0000 (0:00:07.685) 0:01:41.988 ********* 2025-08-29 19:29:01.452576 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.452587 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:29:01.452598 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:29:01.452609 | orchestrator | 2025-08-29 19:29:01.452620 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-northd container] ************************ 2025-08-29 19:29:01.452631 | orchestrator | Friday 29 August 2025 19:28:16 +0000 (0:00:07.759) 0:01:49.748 ********* 2025-08-29 19:29:01.452642 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.452652 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:29:01.452663 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:29:01.452674 | orchestrator | 2025-08-29 19:29:01.452684 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 19:29:01.452695 | orchestrator | Friday 29 August 2025 19:28:19 +0000 (0:00:02.946) 0:01:52.694 ********* 2025-08-29 19:29:01.452706 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:29:01.452717 | orchestrator | 2025-08-29 19:29:01.452728 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 19:29:01.452739 | orchestrator | Friday 29 August 2025 19:28:20 +0000 (0:00:00.129) 0:01:52.824 ********* 2025-08-29 19:29:01.452749 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.452761 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.452771 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.452782 | orchestrator | 2025-08-29 19:29:01.452793 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 19:29:01.452811 | orchestrator | Friday 29 August 2025 19:28:21 +0000 (0:00:01.171) 0:01:53.995 ********* 2025-08-29 19:29:01.452822 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:29:01.452832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:29:01.452849 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.452860 | orchestrator | 2025-08-29 19:29:01.452871 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 19:29:01.452882 | orchestrator | Friday 29 August 2025 19:28:21 +0000 (0:00:00.736) 0:01:54.731 ********* 
2025-08-29 19:29:01.452893 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.452904 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.452914 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.452925 | orchestrator | 2025-08-29 19:29:01.452936 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 19:29:01.452947 | orchestrator | Friday 29 August 2025 19:28:22 +0000 (0:00:00.778) 0:01:55.510 ********* 2025-08-29 19:29:01.452958 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:29:01.452969 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:29:01.452980 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.452991 | orchestrator | 2025-08-29 19:29:01.453002 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 19:29:01.453013 | orchestrator | Friday 29 August 2025 19:28:23 +0000 (0:00:00.675) 0:01:56.185 ********* 2025-08-29 19:29:01.453024 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.453035 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.453052 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.453064 | orchestrator | 2025-08-29 19:29:01.453075 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 19:29:01.453086 | orchestrator | Friday 29 August 2025 19:28:24 +0000 (0:00:01.004) 0:01:57.189 ********* 2025-08-29 19:29:01.453097 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.453108 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.453119 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.453130 | orchestrator | 2025-08-29 19:29:01.453141 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-08-29 19:29:01.453152 | orchestrator | Friday 29 August 2025 19:28:25 +0000 (0:00:00.753) 0:01:57.943 ********* 2025-08-29 19:29:01.453163 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 19:29:01.453328 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.453348 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.453360 | orchestrator | 2025-08-29 19:29:01.453371 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 19:29:01.453382 | orchestrator | Friday 29 August 2025 19:28:25 +0000 (0:00:00.326) 0:01:58.270 ********* 2025-08-29 19:29:01.453394 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453406 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453417 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453428 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453452 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453464 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453482 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453518 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453530 | orchestrator | 2025-08-29 19:29:01.453541 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 19:29:01.453552 | orchestrator | Friday 29 August 2025 19:28:26 +0000 (0:00:01.393) 0:01:59.663 ********* 2025-08-29 19:29:01.453564 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453575 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453587 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453596 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453639 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453668 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453676 | orchestrator | 2025-08-29 19:29:01.453684 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 19:29:01.453692 | orchestrator | Friday 29 August 2025 19:28:31 +0000 (0:00:04.180) 0:02:03.843 ********* 2025-08-29 19:29:01.453705 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453714 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453722 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453756 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:29:01.453792 | orchestrator | 2025-08-29 19:29:01.453801 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 19:29:01.453809 | orchestrator | Friday 29 August 2025 19:28:33 +0000 (0:00:02.947) 0:02:06.791 ********* 2025-08-29 19:29:01.453817 | orchestrator | 2025-08-29 19:29:01.453825 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 19:29:01.453833 | orchestrator | Friday 29 August 2025 19:28:34 +0000 (0:00:00.069) 0:02:06.860 ********* 2025-08-29 19:29:01.453841 | orchestrator | 2025-08-29 19:29:01.453848 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 19:29:01.453856 | orchestrator | Friday 29 August 2025 19:28:34 +0000 (0:00:00.070) 0:02:06.930 ********* 2025-08-29 19:29:01.453864 | orchestrator | 2025-08-29 19:29:01.453872 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 19:29:01.453880 | orchestrator | Friday 29 August 2025 19:28:34 +0000 (0:00:00.077) 0:02:07.008 ********* 2025-08-29 19:29:01.453888 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:29:01.453896 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:29:01.453904 | orchestrator | 2025-08-29 19:29:01.453916 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 19:29:01.453925 | orchestrator | Friday 29 August 
2025 19:28:40 +0000 (0:00:06.258) 0:02:13.267 ********* 2025-08-29 19:29:01.453933 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:29:01.453941 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:29:01.453949 | orchestrator | 2025-08-29 19:29:01.453957 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 19:29:01.453965 | orchestrator | Friday 29 August 2025 19:28:46 +0000 (0:00:06.124) 0:02:19.391 ********* 2025-08-29 19:29:01.453973 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:29:01.453981 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:29:01.453989 | orchestrator | 2025-08-29 19:29:01.453996 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 19:29:01.454005 | orchestrator | Friday 29 August 2025 19:28:53 +0000 (0:00:06.693) 0:02:26.085 ********* 2025-08-29 19:29:01.454042 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:29:01.454052 | orchestrator | 2025-08-29 19:29:01.454061 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 19:29:01.454068 | orchestrator | Friday 29 August 2025 19:28:53 +0000 (0:00:00.134) 0:02:26.220 ********* 2025-08-29 19:29:01.454076 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.454084 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.454092 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.454100 | orchestrator | 2025-08-29 19:29:01.454108 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 19:29:01.454116 | orchestrator | Friday 29 August 2025 19:28:54 +0000 (0:00:00.811) 0:02:27.032 ********* 2025-08-29 19:29:01.454123 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:29:01.454131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:29:01.454139 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.454147 | orchestrator 
| 2025-08-29 19:29:01.454154 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 19:29:01.454162 | orchestrator | Friday 29 August 2025 19:28:54 +0000 (0:00:00.630) 0:02:27.662 ********* 2025-08-29 19:29:01.454185 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.454193 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.454201 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.454209 | orchestrator | 2025-08-29 19:29:01.454217 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 19:29:01.454225 | orchestrator | Friday 29 August 2025 19:28:55 +0000 (0:00:00.762) 0:02:28.425 ********* 2025-08-29 19:29:01.454232 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:29:01.454240 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:29:01.454248 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:29:01.454256 | orchestrator | 2025-08-29 19:29:01.454264 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 19:29:01.454272 | orchestrator | Friday 29 August 2025 19:28:56 +0000 (0:00:00.781) 0:02:29.206 ********* 2025-08-29 19:29:01.454279 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.454287 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.454295 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.454303 | orchestrator | 2025-08-29 19:29:01.454311 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 19:29:01.454318 | orchestrator | Friday 29 August 2025 19:28:57 +0000 (0:00:00.808) 0:02:30.015 ********* 2025-08-29 19:29:01.454326 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:29:01.454334 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:29:01.454342 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:29:01.454349 | orchestrator | 2025-08-29 19:29:01.454357 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 19:29:01.454365 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 19:29:01.454373 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 19:29:01.454381 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 19:29:01.454389 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:29:01.454397 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:29:01.454409 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:29:01.454417 | orchestrator | 2025-08-29 19:29:01.454425 | orchestrator | 2025-08-29 19:29:01.454438 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:29:01.454446 | orchestrator | Friday 29 August 2025 19:28:58 +0000 (0:00:00.856) 0:02:30.871 ********* 2025-08-29 19:29:01.454454 | orchestrator | =============================================================================== 2025-08-29 19:29:01.454461 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 40.11s 2025-08-29 19:29:01.454469 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.34s 2025-08-29 19:29:01.454477 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.94s 2025-08-29 19:29:01.454485 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.88s 2025-08-29 19:29:01.454493 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.64s 2025-08-29 19:29:01.454501 | orchestrator | ovn-db : 
Copying over config.json files for services -------------------- 4.54s 2025-08-29 19:29:01.454509 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.18s 2025-08-29 19:29:01.454522 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s 2025-08-29 19:29:01.454530 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.71s 2025-08-29 19:29:01.454538 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.06s 2025-08-29 19:29:01.454546 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.86s 2025-08-29 19:29:01.454554 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.70s 2025-08-29 19:29:01.454561 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.67s 2025-08-29 19:29:01.454569 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.63s 2025-08-29 19:29:01.454577 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2025-08-29 19:29:01.454585 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s 2025-08-29 19:29:01.454593 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.37s 2025-08-29 19:29:01.454601 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.25s 2025-08-29 19:29:01.454609 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.17s 2025-08-29 19:29:01.454617 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.12s 2025-08-29 19:29:01.454625 | orchestrator | 2025-08-29 19:29:01 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:29:01.454633 | orchestrator | 2025-08-29 
19:29:01 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state STARTED 2025-08-29 19:29:01.454641 | orchestrator | 2025-08-29 19:29:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:31:33.804344 | orchestrator | 2025-08-29 19:31:33 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:31:33.811563 | orchestrator | 2025-08-29 19:31:33 | INFO  | Task 070b2157-5d0f-4ad6-a841-45e6ed09ebaf is in state SUCCESS 2025-08-29 19:31:33.813405 | orchestrator | 2025-08-29 19:31:33.813456 | orchestrator | 2025-08-29 19:31:33.813476 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:31:33.814997 | orchestrator | 2025-08-29 19:31:33.815035 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:31:33.815150 | orchestrator | Friday 29 August 2025 19:25:13 +0000 (0:00:00.350) 0:00:00.350 ********* 2025-08-29 19:31:33.815162 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.815174 | orchestrator | ok: [testbed-node-1] 
2025-08-29 19:31:33.815183 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.815216 | orchestrator |
2025-08-29 19:31:33.815227 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:31:33.815237 | orchestrator | Friday 29 August 2025 19:25:14 +0000 (0:00:00.376) 0:00:00.727 *********
2025-08-29 19:31:33.815247 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-08-29 19:31:33.815258 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-08-29 19:31:33.815268 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-08-29 19:31:33.815278 | orchestrator |
2025-08-29 19:31:33.815287 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-08-29 19:31:33.815297 | orchestrator |
2025-08-29 19:31:33.815306 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 19:31:33.815316 | orchestrator | Friday 29 August 2025 19:25:14 +0000 (0:00:00.462) 0:00:01.189 *********
2025-08-29 19:31:33.815326 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.815431 | orchestrator |
2025-08-29 19:31:33.815443 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-08-29 19:31:33.815453 | orchestrator | Friday 29 August 2025 19:25:15 +0000 (0:00:00.867) 0:00:02.057 *********
2025-08-29 19:31:33.815462 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.815472 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.815481 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.815491 | orchestrator |
2025-08-29 19:31:33.815501 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 19:31:33.815511 | orchestrator | Friday 29 August 2025 19:25:17 +0000 (0:00:01.914) 0:00:03.972 *********
2025-08-29 19:31:33.815531 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.815568 | orchestrator |
2025-08-29 19:31:33.815579 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-08-29 19:31:33.815588 | orchestrator | Friday 29 August 2025 19:25:18 +0000 (0:00:00.662) 0:00:04.634 *********
2025-08-29 19:31:33.815598 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.815608 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.815617 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.815627 | orchestrator |
2025-08-29 19:31:33.815637 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-08-29 19:31:33.815646 | orchestrator | Friday 29 August 2025 19:25:19 +0000 (0:00:00.863) 0:00:05.498 *********
2025-08-29 19:31:33.815656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 19:31:33.815665 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 19:31:33.815675 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 19:31:33.815684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 19:31:33.815694 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 19:31:33.815703 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 19:31:33.815713 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 19:31:33.815723 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 19:31:33.815732 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 19:31:33.815742 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 19:31:33.815751 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 19:31:33.815760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 19:31:33.815770 | orchestrator |
2025-08-29 19:31:33.815787 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 19:31:33.815796 | orchestrator | Friday 29 August 2025 19:25:21 +0000 (0:00:02.372) 0:00:07.870 *********
2025-08-29 19:31:33.815806 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 19:31:33.815816 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 19:31:33.815826 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 19:31:33.815835 | orchestrator |
2025-08-29 19:31:33.815845 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 19:31:33.815855 | orchestrator | Friday 29 August 2025 19:25:22 +0000 (0:00:00.788) 0:00:08.659 *********
2025-08-29 19:31:33.815864 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 19:31:33.815874 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 19:31:33.815883 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 19:31:33.815893 | orchestrator |
2025-08-29 19:31:33.815902 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 19:31:33.815912 | orchestrator | Friday 29 August 2025 19:25:23 +0000 (0:00:01.720) 0:00:10.379 *********
2025-08-29 19:31:33.815921 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-08-29 19:31:33.815952 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.815977 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-08-29 19:31:33.815988 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.815997 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-08-29 19:31:33.816007 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.816016 | orchestrator |
2025-08-29 19:31:33.816026 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-08-29 19:31:33.816035 | orchestrator | Friday 29 August 2025 19:25:24 +0000 (0:00:00.822) 0:00:11.201 *********
2025-08-29 19:31:33.816048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.816108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.816126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.816137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.816147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.816157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.816167 | orchestrator |
2025-08-29 19:31:33.816177 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-08-29 19:31:33.816187 | orchestrator | Friday 29 August 2025 19:25:27 +0000 (0:00:02.712) 0:00:13.914 *********
2025-08-29 19:31:33.816197 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.816206 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.816216 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.816225 | orchestrator |
2025-08-29 19:31:33.816440 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-08-29 19:31:33.816453 | orchestrator | Friday 29 August 2025 19:25:28 +0000 (0:00:01.189) 0:00:15.103 *********
2025-08-29 19:31:33.816469 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-08-29 19:31:33.816479 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-08-29 19:31:33.816489 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-08-29 19:31:33.816498 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-08-29 19:31:33.816508 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-08-29 19:31:33.816517 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-08-29 19:31:33.816526 | orchestrator |
2025-08-29 19:31:33.816536 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-08-29 19:31:33.816546 | orchestrator | Friday 29 August 2025 19:25:30 +0000 (0:00:01.886) 0:00:16.989 *********
2025-08-29 19:31:33.816555 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.816565 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.816574 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.816583 | orchestrator |
2025-08-29 19:31:33.816593 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-08-29 19:31:33.816602 | orchestrator | Friday 29 August 2025 19:25:31 +0000 (0:00:01.333) 0:00:18.322 *********
2025-08-29 19:31:33.816612 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.816621 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.816631 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.816640 | orchestrator |
2025-08-29 19:31:33.816650 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-08-29 19:31:33.816659 | orchestrator | Friday 29 August 2025 19:25:33 +0000 (0:00:01.632) 0:00:19.955 *********
2025-08-29 19:31:33.816670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.816709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.816746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 19:31:33.816765 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.816779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.816800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.816810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 19:31:33.816820 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.816838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.816874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.816885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 19:31:33.816895 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.816905 | orchestrator |
2025-08-29 19:31:33.816915 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-08-29 19:31:33.816939 | orchestrator | Friday 29 August 2025 19:25:35 +0000 (0:00:01.737) 0:00:21.693 *********
2025-08-29 19:31:33.816950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.816996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.817012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.817027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.817037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 19:31:33.817047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.817058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.817073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 19:31:33.817084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.817108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec', '__omit_place_holder__8d67c4746da2a3b7c15847dc56250caab5993dec'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 19:31:33.817118 | orchestrator |
2025-08-29 19:31:33.817128 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-08-29 19:31:33.817138 | orchestrator | Friday 29 August 2025 19:25:39 +0000 (0:00:04.232) 0:00:25.925 *********
2025-08-29 19:31:33.817148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.817158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.817168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 19:31:33.817184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.817200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.817214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 19:31:33.817224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.817234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.817244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 19:31:33.817254 | orchestrator |
2025-08-29 19:31:33.817264 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-08-29 19:31:33.817274 | orchestrator | Friday 29 August 2025 19:25:42 +0000 (0:00:03.359) 0:00:29.284 *********
2025-08-29 19:31:33.817284 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-08-29 19:31:33.817294 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-08-29 19:31:33.817304 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-08-29 19:31:33.817313 | orchestrator |
2025-08-29 19:31:33.817323 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-08-29 19:31:33.817333 | orchestrator | Friday 29 August 2025 19:25:45 +0000 (0:00:02.839) 0:00:32.124 *********
2025-08-29 19:31:33.817343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-08-29 19:31:33.817352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-08-29 19:31:33.817367 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-08-29 19:31:33.817377 | orchestrator |
2025-08-29 19:31:33.817396 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-08-29 19:31:33.817406 | orchestrator | Friday 29 August 2025 19:25:51 +0000 (0:00:06.113) 0:00:38.237 *********
2025-08-29 19:31:33.817416 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.817425 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.817435 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.817444 | orchestrator |
2025-08-29 19:31:33.817454 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-08-29 19:31:33.817464 | orchestrator | Friday 29 August 2025 19:25:52 +0000 (0:00:01.024) 0:00:39.262 *********
2025-08-29 19:31:33.817474 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-08-29 19:31:33.817484 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-08-29 19:31:33.817493 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-08-29 19:31:33.817503 | orchestrator |
2025-08-29 19:31:33.817513 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-08-29 19:31:33.817522 | orchestrator | Friday 29 August 2025 19:25:55 +0000 (0:00:02.743) 0:00:42.006 *********
2025-08-29 19:31:33.817532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-08-29 19:31:33.817542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-08-29 19:31:33.817552 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-08-29 19:31:33.817561 | orchestrator |
2025-08-29 19:31:33.817571 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-08-29 19:31:33.817581 | orchestrator | Friday 29 August 2025 19:25:59 +0000 (0:00:03.512) 0:00:45.519 *********
2025-08-29 19:31:33.817594 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-08-29 19:31:33.817604 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-08-29 19:31:33.817614 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-08-29 19:31:33.817623 | orchestrator |
2025-08-29 19:31:33.817736 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-08-29 19:31:33.817749 | orchestrator | Friday 29 August 2025 19:26:01 +0000 (0:00:02.210) 0:00:47.729 *********
2025-08-29 19:31:33.817759 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-08-29 19:31:33.817768 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-08-29 19:31:33.817778 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-08-29 19:31:33.817787 | orchestrator |
2025-08-29 19:31:33.817797 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 19:31:33.817807 | orchestrator | Friday 29 August 2025 19:26:03 +0000 (0:00:02.480) 0:00:50.209 *********
2025-08-29 19:31:33.817816 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.817826 | orchestrator | 2025-08-29 19:31:33.817835 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-08-29 19:31:33.817845 | orchestrator | Friday 29 August 2025 19:26:05 +0000 (0:00:01.593) 0:00:51.802 ********* 2025-08-29 19:31:33.817855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 19:31:33.817871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 19:31:33.817887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 19:31:33.817898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 19:31:33.817913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 19:31:33.817924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 19:31:33.817948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 19:31:33.817963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 19:31:33.817974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 19:31:33.817984 | orchestrator | 2025-08-29 19:31:33.818001 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 19:31:33.818082 | orchestrator | Friday 29 August 2025 19:26:10 +0000 (0:00:05.198) 0:00:57.001 ********* 2025-08-29 19:31:33.818115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818176 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.818193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818233 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.818243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.818290 | orchestrator | 2025-08-29 19:31:33.818321 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 19:31:33.818331 | orchestrator | Friday 29 August 2025 19:26:11 +0000 (0:00:00.937) 0:00:57.939 ********* 2025-08-29 19:31:33.818345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818373 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818393 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.818452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818465 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818475 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.818485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818528 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.818539 | orchestrator | 2025-08-29 19:31:33.818550 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 19:31:33.818561 | orchestrator | Friday 29 August 2025 19:26:13 +0000 (0:00:01.791) 0:00:59.731 ********* 2025-08-29 19:31:33.818572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818614 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.818625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818735 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.818746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818809 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.818827 | orchestrator | 2025-08-29 19:31:33.818844 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 19:31:33.818864 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:01.804) 0:01:01.535 ********* 2025-08-29 19:31:33.818885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.818969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.818980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.818992 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.819020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819032 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.819051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819098 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.819109 | orchestrator | 2025-08-29 19:31:33.819120 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 19:31:33.819131 | orchestrator | Friday 29 August 2025 19:26:16 +0000 (0:00:01.403) 0:01:02.939 ********* 2025-08-29 19:31:33.819142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819176 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.819193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819234 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.819249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819284 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.819295 | orchestrator | 2025-08-29 19:31:33.819306 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-08-29 19:31:33.819316 | orchestrator | Friday 29 August 2025 19:26:17 +0000 (0:00:00.744) 0:01:03.683 ********* 2025-08-29 19:31:33.819327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2025-08-29 19:31:33.819347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819375 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.819391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819403 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819426 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.819437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819590 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.819601 | orchestrator | 2025-08-29 19:31:33.819612 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-08-29 19:31:33.819623 | orchestrator | Friday 29 August 2025 19:26:18 +0000 (0:00:00.920) 0:01:04.604 ********* 2025-08-29 19:31:33.819634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819696 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.819720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819732 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.819744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819782 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.819793 | orchestrator | 2025-08-29 19:31:33.819804 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 19:31:33.819815 | orchestrator | Friday 29 August 2025 19:26:18 +0000 (0:00:00.538) 0:01:05.142 ********* 2025-08-29 19:31:33.819826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819866 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.819884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819918 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.819948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 19:31:33.819960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 19:31:33.819971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 19:31:33.819989 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.820001 | orchestrator | 2025-08-29 19:31:33.820012 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 19:31:33.820023 | orchestrator | Friday 29 August 2025 19:26:19 +0000 (0:00:00.736) 0:01:05.878 ********* 2025-08-29 19:31:33.820034 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 
2025-08-29 19:31:33.820045 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 19:31:33.820062 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 19:31:33.820073 | orchestrator | 2025-08-29 19:31:33.820084 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 19:31:33.820095 | orchestrator | Friday 29 August 2025 19:26:21 +0000 (0:00:01.641) 0:01:07.520 ********* 2025-08-29 19:31:33.820106 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 19:31:33.820117 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 19:31:33.820128 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 19:31:33.820138 | orchestrator | 2025-08-29 19:31:33.820149 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 19:31:33.820160 | orchestrator | Friday 29 August 2025 19:26:22 +0000 (0:00:01.393) 0:01:08.914 ********* 2025-08-29 19:31:33.820171 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 19:31:33.820182 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 19:31:33.820193 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 19:31:33.820210 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 19:31:33.820229 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 
'id_rsa.pub'})  2025-08-29 19:31:33.820253 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.820276 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.820325 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 19:31:33.820345 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.820363 | orchestrator | 2025-08-29 19:31:33.820388 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 19:31:33.820408 | orchestrator | Friday 29 August 2025 19:26:23 +0000 (0:00:01.304) 0:01:10.219 ********* 2025-08-29 19:31:33.820428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 19:31:33.820448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 
19:31:33.820482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 19:31:33.820514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 19:31:33.820535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2025-08-29 19:31:33.820548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 19:31:33.820565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 19:31:33.820576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 19:31:33.820594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 19:31:33.820606 | orchestrator | 2025-08-29 19:31:33.820616 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 19:31:33.820627 | orchestrator | Friday 29 August 2025 19:26:26 +0000 (0:00:02.871) 0:01:13.090 ********* 2025-08-29 19:31:33.820638 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.820649 | orchestrator | 2025-08-29 19:31:33.820660 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 19:31:33.820670 | orchestrator | Friday 29 August 2025 19:26:27 +0000 (0:00:00.816) 0:01:13.907 ********* 2025-08-29 19:31:33.820682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 19:31:33.820701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.820714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 19:31:33.820759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.820771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 19:31:33.820801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.820828 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820856 | orchestrator | 2025-08-29 19:31:33.820868 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 19:31:33.820879 | orchestrator | Friday 29 August 2025 19:26:32 +0000 (0:00:04.852) 0:01:18.760 ********* 2025-08-29 19:31:33.820890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 19:31:33.820908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.820919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.820978 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.820994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 19:31:33.821006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.821017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821039 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.821057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 19:31:33.821069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.821091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821114 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.821125 | orchestrator | 2025-08-29 19:31:33.821136 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-08-29 
19:31:33.821146 | orchestrator | Friday 29 August 2025 19:26:33 +0000 (0:00:01.204) 0:01:19.964 ********* 2025-08-29 19:31:33.821158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 19:31:33.821170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 19:31:33.821181 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.821192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 19:31:33.821203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 19:31:33.821214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 19:31:33.821225 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.821236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 19:31:33.821247 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.821258 | orchestrator | 2025-08-29 19:31:33.821275 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 19:31:33.821286 | orchestrator | Friday 29 August 2025 19:26:34 +0000 (0:00:01.076) 0:01:21.041 ********* 2025-08-29 
19:31:33.821297 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.821308 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.821318 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.821329 | orchestrator | 2025-08-29 19:31:33.821340 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 19:31:33.821351 | orchestrator | Friday 29 August 2025 19:26:35 +0000 (0:00:01.212) 0:01:22.253 ********* 2025-08-29 19:31:33.821362 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.821379 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.821390 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.821401 | orchestrator | 2025-08-29 19:31:33.821411 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 19:31:33.821422 | orchestrator | Friday 29 August 2025 19:26:38 +0000 (0:00:02.182) 0:01:24.436 ********* 2025-08-29 19:31:33.821433 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.821443 | orchestrator | 2025-08-29 19:31:33.821454 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-08-29 19:31:33.821465 | orchestrator | Friday 29 August 2025 19:26:38 +0000 (0:00:00.882) 0:01:25.319 ********* 2025-08-29 19:31:33.821481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.821493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.821535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821572 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.821584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821606 | orchestrator | 2025-08-29 19:31:33.821617 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 19:31:33.821628 | orchestrator | Friday 29 August 2025 19:26:43 +0000 (0:00:04.332) 0:01:29.651 ********* 2025-08-29 19:31:33.821646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.821664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821691 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.821703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.821714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821742 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.821759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.821771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.821798 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.821809 | orchestrator | 2025-08-29 19:31:33.821820 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-08-29 19:31:33.821831 | orchestrator | Friday 29 August 2025 19:26:43 +0000 (0:00:00.646) 0:01:30.298 ********* 2025-08-29 19:31:33.821842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 19:31:33.821854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 19:31:33.821865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 19:31:33.821876 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.821888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 19:31:33.821899 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.821910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 19:31:33.821921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 19:31:33.821960 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.821972 | orchestrator | 2025-08-29 19:31:33.821983 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-08-29 19:31:33.822002 | orchestrator | Friday 29 August 2025 19:26:44 +0000 (0:00:01.047) 0:01:31.345 ********* 2025-08-29 19:31:33.822072 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.822093 | orchestrator | 
changed: [testbed-node-1] 2025-08-29 19:31:33.822112 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.822132 | orchestrator | 2025-08-29 19:31:33.822150 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 19:31:33.822169 | orchestrator | Friday 29 August 2025 19:26:46 +0000 (0:00:01.462) 0:01:32.807 ********* 2025-08-29 19:31:33.822181 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.822192 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.822326 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.822340 | orchestrator | 2025-08-29 19:31:33.822361 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 19:31:33.822372 | orchestrator | Friday 29 August 2025 19:26:48 +0000 (0:00:02.225) 0:01:35.032 ********* 2025-08-29 19:31:33.822412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.822423 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.822434 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.822445 | orchestrator | 2025-08-29 19:31:33.822456 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 19:31:33.822467 | orchestrator | Friday 29 August 2025 19:26:48 +0000 (0:00:00.315) 0:01:35.348 ********* 2025-08-29 19:31:33.822477 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.822488 | orchestrator | 2025-08-29 19:31:33.822499 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 19:31:33.822510 | orchestrator | Friday 29 August 2025 19:26:49 +0000 (0:00:00.882) 0:01:36.231 ********* 2025-08-29 19:31:33.822522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 19:31:33.822542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 19:31:33.822554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 19:31:33.822574 | orchestrator | 2025-08-29 19:31:33.822586 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-08-29 19:31:33.822599 | orchestrator | Friday 29 August 2025 19:26:52 +0000 (0:00:02.841) 0:01:39.072 ********* 2025-08-29 19:31:33.822628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 19:31:33.822659 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.822680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 19:31:33.822700 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.822729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 19:31:33.822750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.822769 | orchestrator | 2025-08-29 19:31:33.822787 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 19:31:33.822805 | orchestrator | Friday 29 August 2025 19:26:54 +0000 (0:00:01.520) 0:01:40.593 ********* 2025-08-29 19:31:33.822824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 19:31:33.822857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 19:31:33.822879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 19:31:33.822900 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.822919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 19:31:33.822962 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.822991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 19:31:33.823011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 19:31:33.823027 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.823038 | orchestrator | 2025-08-29 19:31:33.823049 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-08-29 19:31:33.823060 | orchestrator | Friday 29 August 2025 19:26:56 +0000 (0:00:02.059) 0:01:42.652 ********* 2025-08-29 19:31:33.823071 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.823082 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.823092 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.823103 | orchestrator | 2025-08-29 19:31:33.823114 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 19:31:33.823124 | orchestrator | Friday 29 August 2025 19:26:56 +0000 (0:00:00.629) 0:01:43.281 ********* 2025-08-29 19:31:33.823136 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.823146 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.823157 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.823168 | orchestrator | 2025-08-29 19:31:33.823178 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 19:31:33.823189 | orchestrator | Friday 29 August 2025 19:26:57 +0000 (0:00:01.062) 
0:01:44.343 ********* 2025-08-29 19:31:33.823200 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.823218 | orchestrator | 2025-08-29 19:31:33.823230 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-08-29 19:31:33.823246 | orchestrator | Friday 29 August 2025 19:26:58 +0000 (0:00:00.681) 0:01:45.025 ********* 2025-08-29 19:31:33.823258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.823270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823282 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.823335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2025-08-29 19:31:33.823358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.823387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823436 | 
orchestrator | 2025-08-29 19:31:33.823447 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-08-29 19:31:33.823458 | orchestrator | Friday 29 August 2025 19:27:03 +0000 (0:00:04.465) 0:01:49.490 ********* 2025-08-29 19:31:33.823470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.823481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.823524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.823536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823547 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.823559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823599 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.823617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.823632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.823666 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.823677 | orchestrator |
2025-08-29 19:31:33.823688 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-08-29 19:31:33.823699 | orchestrator | Friday 29 August 2025  19:27:04 +0000 (0:00:00.951)       0:01:50.442 *********
2025-08-29 19:31:33.823710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 19:31:33.823727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 19:31:33.823739 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.823750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 19:31:33.823767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 19:31:33.823778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 19:31:33.823789 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.823801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 19:31:33.823812 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.823823 | orchestrator |
2025-08-29 19:31:33.823833 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-08-29 19:31:33.823844 | orchestrator | Friday 29 August 2025  19:27:04 +0000 (0:00:00.863)       0:01:51.305 *********
2025-08-29 19:31:33.823855 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.823866 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.823876 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.823887 | orchestrator |
2025-08-29 19:31:33.823897 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-08-29 19:31:33.823913 | orchestrator | Friday 29 August 2025  19:27:06 +0000 (0:00:01.303)       0:01:52.609 *********
2025-08-29 19:31:33.823943 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.823955 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.823966 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.823977 | orchestrator |
2025-08-29 19:31:33.823987 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-08-29 19:31:33.823998 | orchestrator | Friday 29 August 2025  19:27:08 +0000 (0:00:02.070)       0:01:54.680 *********
2025-08-29 19:31:33.824008 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.824019 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.824030 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.824040 | orchestrator |
2025-08-29 19:31:33.824051 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-08-29 19:31:33.824062 | orchestrator | Friday 29 August 2025  19:27:08 +0000 (0:00:00.514)       0:01:55.194 *********
2025-08-29 19:31:33.824072 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.824083 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.824093 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.824104 | orchestrator |
2025-08-29 19:31:33.824115 | orchestrator | TASK [include_role : designate] ************************************************
2025-08-29 19:31:33.824125 | orchestrator | Friday 29 August 2025  19:27:09 +0000 (0:00:00.363)       0:01:55.558 *********
2025-08-29 19:31:33.824136 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.824146 | orchestrator |
2025-08-29 19:31:33.824157 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-08-29 19:31:33.824167 | orchestrator | Friday 29 August 2025  19:27:10 +0000 (0:00:00.995)       0:01:56.553 *********
2025-08-29 19:31:33.824178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:31:33.824203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:31:33.824215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:31:33.824300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:31:33.824312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:31:33.824396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:31:33.824408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824488 | orchestrator |
2025-08-29 19:31:33.824499 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-08-29 19:31:33.824510 | orchestrator | Friday 29 August 2025  19:27:14 +0000 (0:00:04.700)       0:02:01.254 *********
2025-08-29 19:31:33.824528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:31:33.824549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:31:33.824561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:31:33.824572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:31:33.824589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:31:33.824673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:31:33.824713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824758 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.824769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824792 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.824803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.824848 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.824867 | orchestrator |
2025-08-29 19:31:33.824887 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-08-29 19:31:33.824905 | orchestrator | Friday 29 August 2025  19:27:15 +0000 (0:00:00.850)       0:02:02.106 *********
2025-08-29 19:31:33.824945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-08-29 19:31:33.824974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-08-29 19:31:33.824995 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.825021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-08-29 19:31:33.825041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-08-29 19:31:33.825059 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.825077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-08-29 19:31:33.825095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-08-29 19:31:33.825113 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.825131 | orchestrator |
2025-08-29 19:31:33.825150 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-08-29 19:31:33.825168 | orchestrator | Friday 29 August 2025  19:27:16 +0000 (0:00:01.015)       0:02:03.121 *********
2025-08-29 19:31:33.825186 | orchestrator | changed:
[testbed-node-0] 2025-08-29 19:31:33.825205 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.825223 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.825240 | orchestrator | 2025-08-29 19:31:33.825260 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 19:31:33.825277 | orchestrator | Friday 29 August 2025 19:27:18 +0000 (0:00:01.331) 0:02:04.452 ********* 2025-08-29 19:31:33.825295 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.825314 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.825332 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.825350 | orchestrator | 2025-08-29 19:31:33.825368 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 19:31:33.825387 | orchestrator | Friday 29 August 2025 19:27:20 +0000 (0:00:02.056) 0:02:06.508 ********* 2025-08-29 19:31:33.825405 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.825424 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.825444 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.825462 | orchestrator | 2025-08-29 19:31:33.825478 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 19:31:33.825489 | orchestrator | Friday 29 August 2025 19:27:20 +0000 (0:00:00.419) 0:02:06.928 ********* 2025-08-29 19:31:33.825500 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.825511 | orchestrator | 2025-08-29 19:31:33.825521 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 19:31:33.825532 | orchestrator | Friday 29 August 2025 19:27:21 +0000 (0:00:00.747) 0:02:07.676 ********* 2025-08-29 19:31:33.825556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:31:33.825587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.825621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:31:33.825645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.825666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:31:33.825684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.825702 | orchestrator | 2025-08-29 19:31:33.825714 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 19:31:33.825725 | orchestrator | Friday 29 August 2025 19:27:24 +0000 (0:00:03.732) 0:02:11.408 ********* 2025-08-29 19:31:33.825743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:31:33.825760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.825782 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.825795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:31:33.825815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.825837 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.825849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:31:33.825869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.825888 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.825899 | orchestrator | 2025-08-29 19:31:33.825910 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 19:31:33.825921 | orchestrator | Friday 29 August 2025 19:27:28 +0000 (0:00:03.117) 0:02:14.525 ********* 2025-08-29 19:31:33.825958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 19:31:33.825971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 19:31:33.825982 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.825993 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-08-29 19:31:33.826005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-08-29 19:31:33.826150 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.826203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-08-29 19:31:33.826245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-08-29 19:31:33.826303 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.826313 | orchestrator |
2025-08-29 19:31:33.826323 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-08-29 19:31:33.826333 | orchestrator | Friday 29 August 2025 19:27:31 +0000 (0:00:03.049) 0:02:17.575 *********
2025-08-29 19:31:33.826342 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.826352 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.826362 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.826371 | orchestrator |
2025-08-29 19:31:33.826381 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-08-29 19:31:33.826390 | orchestrator | Friday 29 August 2025 19:27:32 +0000 (0:00:01.277) 0:02:18.853 *********
2025-08-29 19:31:33.826399 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.826409 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.826418 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.826428 | orchestrator |
2025-08-29 19:31:33.826438 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-08-29 19:31:33.826447 | orchestrator | Friday 29 August 2025 19:27:34 +0000 (0:00:01.902) 0:02:20.755 *********
2025-08-29 19:31:33.826457 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.826466 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.826476 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.826485 | orchestrator |
2025-08-29 19:31:33.826495 | orchestrator | TASK [include_role : grafana] **************************************************
2025-08-29 19:31:33.826504 | orchestrator | Friday 29 August 2025 19:27:34 +0000 (0:00:00.488) 0:02:21.243 *********
2025-08-29 19:31:33.826514 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.826524 | orchestrator |
2025-08-29 19:31:33.826539 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-08-29 19:31:33.826548 | orchestrator | Friday 29 August 2025 19:27:35 +0000 (0:00:00.844) 0:02:22.088 *********
2025-08-29 19:31:33.826559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:31:33.826570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:31:33.826581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:31:33.826596 | orchestrator |
2025-08-29 19:31:33.826606 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-08-29 19:31:33.826616 | orchestrator | Friday 29 August 2025 19:27:38 +0000 (0:00:03.172) 0:02:25.260 *********
2025-08-29 19:31:33.826632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:31:33.826643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:31:33.826653 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.826663 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.826677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:31:33.826687 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.826697 | orchestrator |
2025-08-29 19:31:33.826707 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-08-29 19:31:33.826716 | orchestrator | Friday 29 August 2025 19:27:39 +0000 (0:00:00.645) 0:02:25.906 *********
2025-08-29 19:31:33.826726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-08-29 19:31:33.826736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-08-29 19:31:33.826747 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.826756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-08-29 19:31:33.826766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-08-29 19:31:33.826779 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.826787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-08-29 19:31:33.826795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-08-29 19:31:33.826803 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.826811 | orchestrator |
2025-08-29 19:31:33.826819 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-08-29 19:31:33.826826 | orchestrator | Friday 29 August 2025 19:27:40 +0000 (0:00:00.730) 0:02:26.636 *********
2025-08-29 19:31:33.826834 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.826842 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.826850 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.826858 | orchestrator |
2025-08-29 19:31:33.826866 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-08-29 19:31:33.826874 | orchestrator | Friday 29 August 2025 19:27:41 +0000 (0:00:01.324) 0:02:27.961 *********
2025-08-29 19:31:33.826882 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.826890 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.826898 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.826905 | orchestrator |
2025-08-29 19:31:33.826913 | orchestrator | TASK [include_role : heat] *****************************************************
2025-08-29 19:31:33.826921 | orchestrator | Friday 29 August 2025 19:27:43 +0000 (0:00:02.004) 0:02:29.965 *********
2025-08-29 19:31:33.826955 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.826966 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.826979 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.826987 | orchestrator |
2025-08-29 19:31:33.826995 | orchestrator | TASK [include_role : horizon] **************************************************
2025-08-29 19:31:33.827003 | orchestrator | Friday 29 August 2025 19:27:43 +0000 (0:00:00.415) 0:02:30.381 *********
2025-08-29 19:31:33.827011 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.827018 | orchestrator |
2025-08-29 19:31:33.827026 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-08-29 19:31:33.827034 | orchestrator | Friday 29 August 2025 19:27:44 +0000 (0:00:00.836) 0:02:31.217 *********
2025-08-29 19:31:33.827048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 19:31:33.827068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 19:31:33.827082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 19:31:33.827096 | orchestrator |
2025-08-29 19:31:33.827104 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-08-29 19:31:33.827112 | orchestrator | Friday 29 August 2025 19:27:47 +0000 (0:00:03.141) 0:02:34.359 *********
2025-08-29 19:31:33.827126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 19:31:33.827135 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.827148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 19:31:33.827164 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.827178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 19:31:33.827187 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.827195 | orchestrator |
2025-08-29 19:31:33.827203 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-08-29 19:31:33.827219 | orchestrator | Friday 29 August 2025 19:27:48 +0000 (0:00:00.924) 0:02:35.283 *********
2025-08-29 19:31:33.827228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 19:31:33.827237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 19:31:33.827245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 19:31:33.827254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 19:31:33.827262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 19:31:33.827270 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.827279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 19:31:33.827287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 19:31:33.827295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 19:31:33.827307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 19:31:33.827316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 19:31:33.827324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 19:31:33.827332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 19:31:33.827347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 19:31:33.827355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 19:31:33.827363 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.827371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 19:31:33.827379 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.827387 | orchestrator |
2025-08-29 19:31:33.827395 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-08-29 19:31:33.827402 | orchestrator | Friday 29 August 2025 19:27:49 +0000 (0:00:00.873) 0:02:36.157 *********
2025-08-29 19:31:33.827410 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.827418 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.827426 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.827434 | orchestrator |
2025-08-29 19:31:33.827441 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-08-29 19:31:33.827449 | orchestrator | Friday 29 August 2025 19:27:50 +0000 (0:00:01.188) 0:02:37.345 *********
2025-08-29 19:31:33.827457 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.827465 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.827472 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.827480 | orchestrator |
2025-08-29 19:31:33.827488 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-08-29 19:31:33.827495 | orchestrator | Friday 29 August 2025 19:27:52 +0000 (0:00:01.902) 0:02:39.247 *********
2025-08-29 19:31:33.827503 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.827511 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.827519 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.827526 | orchestrator |
2025-08-29 19:31:33.827534 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-08-29 19:31:33.827542 | orchestrator | Friday 29 August 2025 19:27:53 +0000 (0:00:00.271) 0:02:39.519 *********
2025-08-29 19:31:33.827549 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.827557 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.827565 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.827572 | orchestrator |
2025-08-29 19:31:33.827580 | orchestrator | TASK [include_role : keystone] *************************************************
2025-08-29 19:31:33.827588 | orchestrator | Friday 29 August 2025 19:27:53 +0000 (0:00:00.412) 0:02:39.932 *********
2025-08-29 19:31:33.827596 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.827603 | orchestrator |
2025-08-29 19:31:33.827611 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-08-29 19:31:33.827619 | orchestrator | Friday 29 August 2025 19:27:54 +0000 (0:00:00.858) 0:02:40.791 *********
2025-08-29 19:31:33.827632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:31:33.827646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:31:33.827658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:31:33.827667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:31:33.827675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:31:33.827683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:31:33.827701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:31:33.827710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:31:33.827722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 19:31:33.827730 | orchestrator | 2025-08-29 19:31:33.827738 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-08-29 19:31:33.827746 | orchestrator | Friday 29 August 2025 19:27:58 +0000 (0:00:04.117) 0:02:44.909 ********* 2025-08-29 19:31:33.827754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 19:31:33.827763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:31:33.827780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 19:31:33.827789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.827797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 19:31:33.827809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:31:33.827818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 19:31:33.827826 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.827834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 19:31:33.827851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:31:33.827860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 19:31:33.827868 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.827876 | orchestrator | 2025-08-29 19:31:33.827884 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2025-08-29 19:31:33.827892 | orchestrator | Friday 29 August 2025 19:27:59 +0000 (0:00:00.931) 0:02:45.840 ********* 2025-08-29 19:31:33.827900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 19:31:33.827912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 19:31:33.827920 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.827973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 19:31:33.827983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 19:31:33.827991 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.827999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 19:31:33.828007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 19:31:33.828015 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.828023 | orchestrator | 2025-08-29 19:31:33.828031 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-08-29 19:31:33.828039 | orchestrator | Friday 29 August 2025 19:28:00 +0000 (0:00:00.808) 0:02:46.648 ********* 2025-08-29 19:31:33.828056 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.828064 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.828072 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.828080 | orchestrator | 2025-08-29 19:31:33.828088 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-08-29 19:31:33.828096 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:01.404) 0:02:48.053 ********* 2025-08-29 19:31:33.828104 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.828111 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.828119 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.828127 | orchestrator | 2025-08-29 19:31:33.828135 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-08-29 19:31:33.828142 | orchestrator | Friday 29 August 2025 19:28:03 +0000 (0:00:02.002) 0:02:50.055 ********* 2025-08-29 19:31:33.828150 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.828158 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.828166 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.828174 | orchestrator | 2025-08-29 19:31:33.828181 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-08-29 19:31:33.828189 | orchestrator | Friday 29 August 2025 19:28:04 +0000 (0:00:00.427) 0:02:50.482 ********* 2025-08-29 19:31:33.828197 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.828205 | orchestrator | 2025-08-29 19:31:33.828212 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-08-29 19:31:33.828220 | orchestrator | Friday 29 August 2025 19:28:04 +0000 (0:00:00.925) 0:02:51.408 ********* 2025-08-29 19:31:33.828234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:31:33.828247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:31:33.828256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:31:33.828294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828302 | orchestrator | 2025-08-29 19:31:33.828310 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-08-29 19:31:33.828318 | orchestrator | Friday 29 August 2025 19:28:08 +0000 (0:00:03.671) 0:02:55.080 ********* 2025-08-29 19:31:33.828340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:31:33.828349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828361 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.828370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:31:33.828382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828390 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.828399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:31:33.828410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828418 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.828430 | orchestrator | 2025-08-29 19:31:33.828446 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-08-29 19:31:33.828455 | orchestrator | Friday 29 August 2025 19:28:10 +0000 (0:00:01.356) 0:02:56.437 ********* 2025-08-29 19:31:33.828463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 19:31:33.828471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 19:31:33.828479 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.828487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 19:31:33.828495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 19:31:33.828503 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.828511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 19:31:33.828519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 19:31:33.828526 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.828534 | orchestrator | 2025-08-29 19:31:33.828542 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-08-29 19:31:33.828550 | orchestrator | Friday 29 August 2025 19:28:11 +0000 (0:00:01.014) 0:02:57.451 ********* 2025-08-29 19:31:33.828557 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.828565 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.828573 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.828581 | orchestrator | 2025-08-29 19:31:33.828588 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-08-29 19:31:33.828596 | orchestrator | Friday 29 August 2025 19:28:12 +0000 (0:00:01.312) 0:02:58.763 ********* 2025-08-29 19:31:33.828604 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.828611 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.828619 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.828627 | 
orchestrator | 2025-08-29 19:31:33.828635 | orchestrator | TASK [include_role : manila] *************************************************** 2025-08-29 19:31:33.828642 | orchestrator | Friday 29 August 2025 19:28:14 +0000 (0:00:02.073) 0:03:00.837 ********* 2025-08-29 19:31:33.828654 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.828662 | orchestrator | 2025-08-29 19:31:33.828670 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-08-29 19:31:33.828678 | orchestrator | Friday 29 August 2025 19:28:15 +0000 (0:00:01.269) 0:03:02.107 ********* 2025-08-29 19:31:33.828686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 19:31:33.828702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 19:31:33.828740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 19:31:33.828781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.828953 | orchestrator | 2025-08-29 19:31:33.828961 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-08-29 19:31:33.828970 | orchestrator | Friday 29 August 2025 19:28:19 +0000 (0:00:03.712) 0:03:05.819 ********* 2025-08-29 19:31:33.828978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 19:31:33.828997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829022 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 19:31:33.829044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829078 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 19:31:33.829095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.829128 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829136 | orchestrator | 2025-08-29 19:31:33.829144 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 19:31:33.829152 | orchestrator | Friday 29 August 2025 19:28:20 +0000 (0:00:00.707) 0:03:06.526 ********* 2025-08-29 19:31:33.829160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 19:31:33.829168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 19:31:33.829176 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 19:31:33.829192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 19:31:33.829200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 19:31:33.829211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 19:31:33.829219 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829227 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829235 | orchestrator | 2025-08-29 19:31:33.829243 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 19:31:33.829251 | orchestrator | Friday 29 August 2025 19:28:21 +0000 (0:00:01.564) 0:03:08.091 ********* 2025-08-29 19:31:33.829259 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.829267 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.829275 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.829283 | orchestrator | 2025-08-29 19:31:33.829291 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 19:31:33.829299 | orchestrator | Friday 29 August 2025 19:28:23 +0000 (0:00:01.352) 0:03:09.444 ********* 2025-08-29 19:31:33.829306 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.829314 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.829322 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.829330 | orchestrator | 2025-08-29 19:31:33.829338 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 19:31:33.829346 | orchestrator | Friday 29 August 2025 19:28:25 +0000 (0:00:02.124) 0:03:11.568 ********* 2025-08-29 19:31:33.829354 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.829362 | orchestrator | 2025-08-29 19:31:33.829370 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 19:31:33.829378 | orchestrator | Friday 29 August 2025 19:28:26 +0000 (0:00:01.443) 0:03:13.011 ********* 2025-08-29 19:31:33.829385 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 19:31:33.829393 | orchestrator | 2025-08-29 19:31:33.829401 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 
19:31:33.829409 | orchestrator | Friday 29 August 2025 19:28:29 +0000 (0:00:02.786) 0:03:15.798 ********* 2025-08-29 19:31:33.829423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:31:33.829440 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 19:31:33.829448 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:31:33.829474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 19:31:33.829482 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:31:33.829508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 19:31:33.829516 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829525 | orchestrator | 2025-08-29 19:31:33.829553 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
2025-08-29 19:31:33.829561 | orchestrator | Friday 29 August 2025 19:28:32 +0000 (0:00:02.806) 0:03:18.605 ********* 2025-08-29 19:31:33.829571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:31:33.829591 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 19:31:33.829601 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:31:33.829625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 19:31:33.829639 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:31:33.829663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 19:31:33.829672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829681 | orchestrator | 2025-08-29 19:31:33.829690 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 
19:31:33.829703 | orchestrator | Friday 29 August 2025 19:28:34 +0000 (0:00:02.394) 0:03:20.999 ********* 2025-08-29 19:31:33.829711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 19:31:33.829720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 19:31:33.829732 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 19:31:33.829748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 19:31:33.829756 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 19:31:33.829777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2025-08-29 19:31:33.829785 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829793 | orchestrator | 2025-08-29 19:31:33.829801 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-08-29 19:31:33.829809 | orchestrator | Friday 29 August 2025 19:28:37 +0000 (0:00:02.817) 0:03:23.817 ********* 2025-08-29 19:31:33.829817 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.829825 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.829833 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.829840 | orchestrator | 2025-08-29 19:31:33.829849 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 19:31:33.829860 | orchestrator | Friday 29 August 2025 19:28:39 +0000 (0:00:01.768) 0:03:25.586 ********* 2025-08-29 19:31:33.829868 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829876 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829884 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829891 | orchestrator | 2025-08-29 19:31:33.829899 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-08-29 19:31:33.829912 | orchestrator | Friday 29 August 2025 19:28:40 +0000 (0:00:01.222) 0:03:26.809 ********* 2025-08-29 19:31:33.829919 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.829941 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.829949 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.829957 | orchestrator | 2025-08-29 19:31:33.829965 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 19:31:33.829973 | orchestrator | Friday 29 August 2025 19:28:40 +0000 (0:00:00.263) 0:03:27.073 ********* 2025-08-29 19:31:33.829980 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 
19:31:33.829988 | orchestrator | 2025-08-29 19:31:33.829996 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 19:31:33.830004 | orchestrator | Friday 29 August 2025 19:28:41 +0000 (0:00:01.305) 0:03:28.378 ********* 2025-08-29 19:31:33.830012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 19:31:33.830054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 19:31:33.830069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 19:31:33.830078 | orchestrator | 2025-08-29 19:31:33.830086 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 19:31:33.830093 | orchestrator | Friday 29 August 2025 19:28:43 +0000 (0:00:01.518) 0:03:29.896 ********* 2025-08-29 19:31:33.830101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 19:31:33.830114 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.830126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 19:31:33.830135 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.830143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 19:31:33.830151 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.830159 | orchestrator | 2025-08-29 19:31:33.830167 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 19:31:33.830175 | orchestrator | Friday 29 August 2025 19:28:43 +0000 (0:00:00.392) 0:03:30.289 ********* 2025-08-29 19:31:33.830183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 19:31:33.830192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 19:31:33.830200 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.830207 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.830219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 19:31:33.830227 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.830235 | orchestrator | 2025-08-29 19:31:33.830243 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-08-29 19:31:33.830251 | orchestrator | Friday 29 August 2025 19:28:44 +0000 (0:00:00.639) 0:03:30.928 ********* 2025-08-29 19:31:33.830259 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.830267 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.830274 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.830282 | orchestrator | 2025-08-29 19:31:33.830290 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-08-29 19:31:33.830298 | orchestrator | Friday 29 August 2025 19:28:45 +0000 (0:00:00.810) 0:03:31.738 ********* 2025-08-29 19:31:33.830305 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.830318 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.830326 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.830334 | orchestrator | 
2025-08-29 19:31:33.830342 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-08-29 19:31:33.830349 | orchestrator | Friday 29 August 2025 19:28:46 +0000 (0:00:01.370) 0:03:33.109 ********* 2025-08-29 19:31:33.830357 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.830365 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.830373 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.830380 | orchestrator | 2025-08-29 19:31:33.830388 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-08-29 19:31:33.830396 | orchestrator | Friday 29 August 2025 19:28:47 +0000 (0:00:00.323) 0:03:33.433 ********* 2025-08-29 19:31:33.830404 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.830411 | orchestrator | 2025-08-29 19:31:33.830419 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-08-29 19:31:33.830427 | orchestrator | Friday 29 August 2025 19:28:48 +0000 (0:00:01.473) 0:03:34.906 ********* 2025-08-29 19:31:33.830438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:31:33.830447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 19:31:33.830501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:31:33.830527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.830572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 19:31:33.830626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.830677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.830697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.830714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:31:33.830769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.830778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.830794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 19:31:33.830831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.830885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830893 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.830914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.830947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.830957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.830965 | orchestrator | 2025-08-29 19:31:33.830973 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-08-29 19:31:33.830981 | orchestrator | Friday 29 August 2025 19:28:52 +0000 (0:00:04.225) 0:03:39.131 ********* 2025-08-29 19:31:33.830995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:31:33.831004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 19:31:33.831050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:31:33.831080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 
'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831112 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:31:33.831133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.831142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 19:31:33.831184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 19:31:33.831255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.831380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.831388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.831405 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.831420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.831462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 19:31:33.831511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.831524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.831532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.831540 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.831553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 19:31:33.831562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:31:33.831570 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.831578 | orchestrator | 2025-08-29 19:31:33.831590 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-08-29 19:31:33.831598 | orchestrator | Friday 29 August 2025 19:28:54 +0000 (0:00:01.447) 0:03:40.579 ********* 2025-08-29 19:31:33.831609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 19:31:33.831617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 19:31:33.831625 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.831633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 19:31:33.831642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 19:31:33.831650 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.831658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-08-29 19:31:33.831665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 19:31:33.831673 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.831681 | orchestrator | 2025-08-29 19:31:33.831689 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-08-29 19:31:33.831697 | orchestrator | Friday 29 August 2025 19:28:56 +0000 (0:00:02.174) 0:03:42.754 ********* 2025-08-29 19:31:33.831705 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.831712 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.831720 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.831728 | orchestrator | 2025-08-29 19:31:33.831736 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-08-29 19:31:33.831744 | orchestrator | Friday 29 August 2025 19:28:57 +0000 (0:00:01.262) 0:03:44.016 ********* 2025-08-29 19:31:33.831751 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.831759 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.831767 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.831775 | orchestrator | 2025-08-29 19:31:33.831782 | orchestrator | TASK [include_role : placement] ************************************************ 2025-08-29 19:31:33.831790 | orchestrator | Friday 29 August 2025 19:28:59 +0000 (0:00:02.018) 0:03:46.034 ********* 2025-08-29 19:31:33.831798 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.831806 | orchestrator | 2025-08-29 19:31:33.831814 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-08-29 19:31:33.831821 | orchestrator | Friday 29 August 2025 19:29:00 +0000 (0:00:01.266) 0:03:47.301 
********* 2025-08-29 19:31:33.831954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.831974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.831987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.831995 | orchestrator | 2025-08-29 19:31:33.832003 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-08-29 19:31:33.832011 | orchestrator | Friday 29 August 2025 19:29:04 +0000 (0:00:03.524) 0:03:50.826 ********* 2025-08-29 19:31:33.832020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.832028 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.832059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.832074 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.832083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.832091 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.832099 | orchestrator | 2025-08-29 19:31:33.832107 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-08-29 19:31:33.832115 | orchestrator | Friday 29 August 2025 19:29:04 +0000 (0:00:00.500) 0:03:51.326 ********* 2025-08-29 19:31:33.832123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 19:31:33.832135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 19:31:33.832143 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.832151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 19:31:33.832159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 19:31:33.832168 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.832176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 19:31:33.832184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 19:31:33.832192 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.832200 | orchestrator | 2025-08-29 19:31:33.832208 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-08-29 19:31:33.832215 | orchestrator | Friday 29 August 2025 19:29:05 +0000 (0:00:00.765) 0:03:52.092 ********* 2025-08-29 19:31:33.832223 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.832231 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.832239 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.832247 | orchestrator | 2025-08-29 19:31:33.832255 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-08-29 19:31:33.832263 | orchestrator | Friday 29 August 2025 19:29:06 +0000 (0:00:01.255) 0:03:53.348 ********* 2025-08-29 19:31:33.832271 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.832278 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.832286 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.832294 | orchestrator | 2025-08-29 19:31:33.832302 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-08-29 19:31:33.832314 | orchestrator | Friday 29 August 2025 19:29:09 +0000 (0:00:02.159) 0:03:55.508 ********* 2025-08-29 19:31:33.832322 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.832330 | orchestrator | 2025-08-29 19:31:33.832338 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-08-29 19:31:33.832346 | orchestrator | Friday 29 August 2025 19:29:10 +0000 (0:00:01.496) 0:03:57.005 ********* 2025-08-29 19:31:33.832378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.832395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.832405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.832481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832514 | orchestrator |
2025-08-29 19:31:33.832523 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-08-29 19:31:33.832536 | orchestrator | Friday 29 August 2025 19:29:14 +0000 (0:00:04.120) 0:04:01.125 *********
2025-08-29 19:31:33.832567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.832577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832594 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.832607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.832616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832637 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.832667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 19:31:33.832677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 19:31:33.832697 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.832705 | orchestrator |
2025-08-29 19:31:33.832713 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-08-29 19:31:33.832721 | orchestrator | Friday 29 August 2025 19:29:15 +0000 (0:00:01.268) 0:04:02.394 *********
2025-08-29 19:31:33.832729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832768 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.832776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832829 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.832838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-08-29 19:31:33.832871 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.832879 | orchestrator |
2025-08-29 19:31:33.832887 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-08-29 19:31:33.832895 | orchestrator | Friday 29 August 2025 19:29:16 +0000 (0:00:00.965) 0:04:03.360 *********
2025-08-29 19:31:33.832903 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.832911 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.832919 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.832942 | orchestrator |
2025-08-29 19:31:33.832951 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-08-29 19:31:33.832959 | orchestrator | Friday 29 August 2025 19:29:18 +0000 (0:00:01.449) 0:04:04.810 *********
2025-08-29 19:31:33.832967 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.832975 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.832982 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.832990 | orchestrator |
2025-08-29 19:31:33.833001 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-08-29 19:31:33.833015 | orchestrator | Friday 29 August 2025 19:29:20 +0000 (0:00:02.146) 0:04:06.956 *********
2025-08-29 19:31:33.833023 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.833031 | orchestrator |
2025-08-29 19:31:33.833039 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-08-29 19:31:33.833047 | orchestrator | Friday 29 August 2025 19:29:22 +0000 (0:00:01.611) 0:04:08.567 *********
2025-08-29 19:31:33.833055 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-08-29 19:31:33.833063 | orchestrator |
2025-08-29 19:31:33.833071 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-08-29 19:31:33.833079 | orchestrator | Friday 29 August 2025 19:29:22 +0000 (0:00:00.792) 0:04:09.360 *********
2025-08-29 19:31:33.833087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833113 | orchestrator |
2025-08-29 19:31:33.833121 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-08-29 19:31:33.833129 | orchestrator | Friday 29 August 2025 19:29:27 +0000 (0:00:04.732) 0:04:14.092 *********
2025-08-29 19:31:33.833162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833172 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833188 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.833197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833209 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.833217 | orchestrator |
2025-08-29 19:31:33.833228 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-08-29 19:31:33.833236 | orchestrator | Friday 29 August 2025 19:29:28 +0000 (0:00:01.092) 0:04:15.185 *********
2025-08-29 19:31:33.833244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 19:31:33.833253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 19:31:33.833261 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 19:31:33.833282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 19:31:33.833290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 19:31:33.833298 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.833307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-08-29 19:31:33.833315 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.833323 | orchestrator |
2025-08-29 19:31:33.833331 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 19:31:33.833339 | orchestrator | Friday 29 August 2025 19:29:30 +0000 (0:00:01.650) 0:04:16.835 *********
2025-08-29 19:31:33.833347 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.833354 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.833362 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.833370 | orchestrator |
2025-08-29 19:31:33.833378 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 19:31:33.833386 | orchestrator | Friday 29 August 2025 19:29:32 +0000 (0:00:02.577) 0:04:19.412 *********
2025-08-29 19:31:33.833394 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:31:33.833402 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:31:33.833409 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:31:33.833417 | orchestrator |
2025-08-29 19:31:33.833425 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-08-29 19:31:33.833433 | orchestrator | Friday 29 August 2025 19:29:36 +0000 (0:00:03.095) 0:04:22.508 *********
2025-08-29 19:31:33.833462 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-08-29 19:31:33.833472 | orchestrator |
2025-08-29 19:31:33.833480 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-08-29 19:31:33.833492 | orchestrator | Friday 29 August 2025 19:29:37 +0000 (0:00:01.416) 0:04:23.924 *********
2025-08-29 19:31:33.833501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833509 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833526 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.833537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833546 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.833553 | orchestrator |
2025-08-29 19:31:33.833562 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-08-29 19:31:33.833570 | orchestrator | Friday 29 August 2025 19:29:38 +0000 (0:00:01.264) 0:04:25.189 *********
2025-08-29 19:31:33.833578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833586 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833602 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.833610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 19:31:33.833622 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.833630 | orchestrator |
2025-08-29 19:31:33.833638 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-08-29 19:31:33.833646 | orchestrator | Friday 29 August 2025 19:29:40 +0000 (0:00:01.394) 0:04:26.583 *********
2025-08-29 19:31:33.833654 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833662 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.833669 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.833677 | orchestrator |
2025-08-29 19:31:33.833706 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 19:31:33.833716 | orchestrator | Friday 29 August 2025 19:29:42 +0000 (0:00:01.880) 0:04:28.464 *********
2025-08-29 19:31:33.833724 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.833732 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.833740 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.833748 | orchestrator |
2025-08-29 19:31:33.833756 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 19:31:33.833764 | orchestrator | Friday 29 August 2025 19:29:44 +0000 (0:00:02.384) 0:04:30.849 *********
2025-08-29 19:31:33.833771 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.833779 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.833787 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.833795 | orchestrator |
2025-08-29 19:31:33.833803 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-08-29 19:31:33.833811 | orchestrator | Friday 29 August 2025 19:29:47 +0000 (0:00:02.901) 0:04:33.750 *********
2025-08-29 19:31:33.833819 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-08-29 19:31:33.833827 | orchestrator |
2025-08-29 19:31:33.833835 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-08-29 19:31:33.833842 | orchestrator | Friday 29 August 2025 19:29:48 +0000 (0:00:00.785) 0:04:34.536 *********
2025-08-29 19:31:33.833857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 19:31:33.833865 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 19:31:33.833882 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.833890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 19:31:33.833898 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.833906 | orchestrator |
2025-08-29 19:31:33.833914 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-08-29 19:31:33.833962 | orchestrator | Friday 29 August 2025 19:29:49 +0000 (0:00:01.096) 0:04:35.633 *********
2025-08-29 19:31:33.833972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 19:31:33.833980 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.833988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 19:31:33.833996 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.834060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 19:31:33.834072 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.834080 | orchestrator |
2025-08-29 19:31:33.834088 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-08-29 19:31:33.834096 | orchestrator | Friday 29 August 2025 19:29:50 +0000 (0:00:01.354) 0:04:36.988 *********
2025-08-29 19:31:33.834104 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:31:33.834111 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:31:33.834119 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:31:33.834127 | orchestrator |
2025-08-29 19:31:33.834135 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 19:31:33.834143 | orchestrator | Friday 29 August 2025 19:29:52 +0000 (0:00:01.585) 0:04:38.573 *********
2025-08-29 19:31:33.834150 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.834158 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.834166 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.834174 | orchestrator |
2025-08-29 19:31:33.834181 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 19:31:33.834189 | orchestrator | Friday 29 August 2025 19:29:54 +0000 (0:00:02.339) 0:04:40.912 *********
2025-08-29 19:31:33.834197 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:31:33.834205 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:31:33.834212 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:31:33.834220 | orchestrator |
2025-08-29 19:31:33.834228 | orchestrator | TASK [include_role : octavia] **************************************************
2025-08-29 19:31:33.834236 | orchestrator | Friday 29 August 2025 19:29:57 +0000 (0:00:03.038) 0:04:43.951 *********
2025-08-29 19:31:33.834248 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:31:33.834256 | orchestrator |
2025-08-29 19:31:33.834264 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-08-29 19:31:33.834271 | orchestrator | Friday 29 August 2025 19:29:59 +0000 (0:00:01.595) 0:04:45.546 *********
2025-08-29 19:31:33.834280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.834295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.834326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 19:31:33.834336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 19:31:33.834345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.834357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 19:31:33.834387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.834450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.834459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.834475 | orchestrator | 2025-08-29 19:31:33.834483 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 19:31:33.834491 | orchestrator | Friday 29 August 2025 19:30:02 +0000 (0:00:03.310) 0:04:48.856 ********* 2025-08-29 19:31:33.834520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.834530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 19:31:33.834538 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.834571 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.834579 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.834608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 19:31:33.834618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.834651 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.834660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.834668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 19:31:33.834677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 19:31:33.834716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:31:33.834729 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.834737 | orchestrator | 2025-08-29 19:31:33.834746 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 19:31:33.834754 | orchestrator | Friday 29 August 2025 19:30:03 +0000 (0:00:00.661) 0:04:49.517 ********* 2025-08-29 19:31:33.834762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 19:31:33.834773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 19:31:33.834782 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.834790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 19:31:33.834798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 19:31:33.834806 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.834814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 19:31:33.834822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 19:31:33.834830 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.834838 | orchestrator | 2025-08-29 19:31:33.834845 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-08-29 19:31:33.834853 | orchestrator | Friday 29 August 2025 19:30:04 +0000 (0:00:01.231) 0:04:50.749 ********* 2025-08-29 19:31:33.834861 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.834869 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.834877 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.834884 | orchestrator | 2025-08-29 19:31:33.834892 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 19:31:33.834900 | orchestrator | Friday 29 August 2025 19:30:05 +0000 (0:00:01.439) 0:04:52.189 ********* 2025-08-29 19:31:33.834908 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.834916 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.834923 | orchestrator | changed: [testbed-node-2] 
2025-08-29 19:31:33.834945 | orchestrator | 2025-08-29 19:31:33.834953 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 19:31:33.834961 | orchestrator | Friday 29 August 2025 19:30:07 +0000 (0:00:02.082) 0:04:54.272 ********* 2025-08-29 19:31:33.834969 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.834976 | orchestrator | 2025-08-29 19:31:33.834984 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 19:31:33.834992 | orchestrator | Friday 29 August 2025 19:30:09 +0000 (0:00:01.356) 0:04:55.628 ********* 2025-08-29 19:31:33.835024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:31:33.835039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:31:33.835054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:31:33.835063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:31:33.835095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:31:33.835110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:31:33.835119 | orchestrator | 2025-08-29 19:31:33.835127 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 19:31:33.835135 | orchestrator | Friday 29 August 2025 19:30:14 +0000 (0:00:05.454) 0:05:01.082 ********* 2025-08-29 19:31:33.835147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:31:33.835156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:31:33.835165 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.835174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:31:33.835210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:31:33.835220 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.835232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:31:33.835241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:31:33.835250 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.835258 | orchestrator | 2025-08-29 19:31:33.835266 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-08-29 19:31:33.835274 | orchestrator | Friday 29 August 2025 19:30:15 +0000 (0:00:00.666) 0:05:01.749 ********* 2025-08-29 19:31:33.835282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 19:31:33.835295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 19:31:33.835303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 19:31:33.835311 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.835319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 19:31:33.835350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 19:31:33.835359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 19:31:33.835368 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.835376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 19:31:33.835384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 19:31:33.835392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 19:31:33.835400 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.835408 | orchestrator | 2025-08-29 19:31:33.835416 | orchestrator | TASK 
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 19:31:33.835424 | orchestrator | Friday 29 August 2025 19:30:16 +0000 (0:00:00.955) 0:05:02.704 ********* 2025-08-29 19:31:33.835432 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.835440 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.835451 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.835460 | orchestrator | 2025-08-29 19:31:33.835468 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 19:31:33.835475 | orchestrator | Friday 29 August 2025 19:30:17 +0000 (0:00:00.891) 0:05:03.596 ********* 2025-08-29 19:31:33.835483 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.835491 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.835499 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.835507 | orchestrator | 2025-08-29 19:31:33.835514 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 19:31:33.835522 | orchestrator | Friday 29 August 2025 19:30:18 +0000 (0:00:01.327) 0:05:04.923 ********* 2025-08-29 19:31:33.835530 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.835538 | orchestrator | 2025-08-29 19:31:33.835546 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 19:31:33.835554 | orchestrator | Friday 29 August 2025 19:30:19 +0000 (0:00:01.389) 0:05:06.312 ********* 2025-08-29 19:31:33.835562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 19:31:33.835575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 19:31:33.835584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-08-29 19:31:33.835625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 19:31:33.835638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.835647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 
19:31:33.835655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.835709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 19:31:33.835718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 19:31:33.835727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.835760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 19:31:33.835772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 19:31:33.835781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.835810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 19:31:33.835824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 19:31:33.835837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.835869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 19:31:33.835884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 19:31:33.835893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.835913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.835922 | orchestrator | 2025-08-29 19:31:33.835967 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-08-29 19:31:33.835975 | orchestrator | Friday 29 August 2025 19:30:24 +0000 (0:00:04.397) 0:05:10.710 ********* 2025-08-29 19:31:33.835984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 19:31:33.835992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 19:31:33.836006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836023 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.836036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 19:31:33.836086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 19:31:33.836104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.836134 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 19:31:33.836151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 19:31:33.836166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 19:31:33.836199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.836207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 19:31:33.836216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 19:31:33.836225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 19:31:33.836264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.836273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 19:31:33.836290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.836311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 19:31:33.836323 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 19:31:33.836351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:31:33.836359 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836367 | orchestrator | 2025-08-29 19:31:33.836375 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-08-29 19:31:33.836383 | orchestrator | Friday 29 August 2025 19:30:25 +0000 (0:00:01.259) 0:05:11.969 ********* 2025-08-29 19:31:33.836391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 19:31:33.836399 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 19:31:33.836408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 19:31:33.836417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 19:31:33.836425 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 19:31:33.836445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 19:31:33.836458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 19:31:33.836466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 19:31:33.836475 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 19:31:33.836491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 19:31:33.836502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 19:31:33.836510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 19:31:33.836518 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836526 | orchestrator | 2025-08-29 19:31:33.836534 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-08-29 19:31:33.836542 | orchestrator | Friday 29 August 2025 19:30:26 +0000 (0:00:01.105) 0:05:13.075 ********* 2025-08-29 19:31:33.836550 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836558 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836566 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 19:31:33.836573 | orchestrator | 2025-08-29 19:31:33.836581 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-08-29 19:31:33.836589 | orchestrator | Friday 29 August 2025 19:30:27 +0000 (0:00:00.477) 0:05:13.552 ********* 2025-08-29 19:31:33.836597 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836605 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836613 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836620 | orchestrator | 2025-08-29 19:31:33.836628 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-08-29 19:31:33.836636 | orchestrator | Friday 29 August 2025 19:30:28 +0000 (0:00:01.430) 0:05:14.982 ********* 2025-08-29 19:31:33.836644 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.836652 | orchestrator | 2025-08-29 19:31:33.836659 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-08-29 19:31:33.836667 | orchestrator | Friday 29 August 2025 19:30:30 +0000 (0:00:01.777) 0:05:16.760 ********* 2025-08-29 19:31:33.836676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 19:31:33.836694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 19:31:33.836707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 19:31:33.836716 | orchestrator | 2025-08-29 19:31:33.836724 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 19:31:33.836732 | orchestrator | Friday 29 August 2025 19:30:33 +0000 (0:00:02.692) 0:05:19.452 ********* 2025-08-29 19:31:33.836740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 19:31:33.836749 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 19:31:33.836770 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 19:31:33.836792 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836800 | orchestrator | 2025-08-29 19:31:33.836808 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 19:31:33.836815 | orchestrator | Friday 29 August 2025 
19:30:33 +0000 (0:00:00.403) 0:05:19.856 ********* 2025-08-29 19:31:33.836823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 19:31:33.836831 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 19:31:33.836847 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 19:31:33.836868 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836876 | orchestrator | 2025-08-29 19:31:33.836884 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 19:31:33.836892 | orchestrator | Friday 29 August 2025 19:30:34 +0000 (0:00:01.000) 0:05:20.856 ********* 2025-08-29 19:31:33.836900 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836908 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836915 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836923 | orchestrator | 2025-08-29 19:31:33.836944 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 19:31:33.836952 | orchestrator | Friday 29 August 2025 19:30:34 +0000 (0:00:00.446) 0:05:21.303 ********* 2025-08-29 19:31:33.836960 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.836968 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.836975 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.836983 | orchestrator | 2025-08-29 19:31:33.836991 | orchestrator | TASK [include_role : skyline] 
************************************************** 2025-08-29 19:31:33.836999 | orchestrator | Friday 29 August 2025 19:30:36 +0000 (0:00:01.371) 0:05:22.675 ********* 2025-08-29 19:31:33.837006 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:31:33.837014 | orchestrator | 2025-08-29 19:31:33.837022 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 19:31:33.837034 | orchestrator | Friday 29 August 2025 19:30:38 +0000 (0:00:01.800) 0:05:24.475 ********* 2025-08-29 19:31:33.837042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.837055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.837064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.837076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.837085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.837098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 19:31:33.837106 | orchestrator | 2025-08-29 19:31:33.837118 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 19:31:33.837126 | orchestrator | Friday 29 August 2025 19:30:44 +0000 (0:00:06.199) 0:05:30.675 ********* 2025-08-29 19:31:33.837134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.837146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.837154 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.837175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.837183 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.837208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 19:31:33.837217 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837224 | orchestrator | 2025-08-29 19:31:33.837232 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 19:31:33.837240 | orchestrator | Friday 29 August 2025 19:30:44 +0000 (0:00:00.651) 0:05:31.326 ********* 2025-08-29 19:31:33.837253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837293 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837325 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 19:31:33.837370 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837378 | orchestrator | 2025-08-29 19:31:33.837386 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 19:31:33.837394 | orchestrator | Friday 29 August 2025 19:30:46 +0000 (0:00:01.689) 0:05:33.016 ********* 2025-08-29 19:31:33.837401 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.837409 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.837417 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.837425 | orchestrator | 2025-08-29 19:31:33.837433 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 19:31:33.837440 | orchestrator | Friday 29 August 2025 19:30:47 +0000 (0:00:01.351) 0:05:34.367 ********* 2025-08-29 19:31:33.837448 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.837460 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.837468 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.837476 | orchestrator | 2025-08-29 19:31:33.837484 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 19:31:33.837492 | orchestrator | Friday 29 August 2025 19:30:50 +0000 (0:00:02.251) 0:05:36.618 ********* 2025-08-29 19:31:33.837499 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837507 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837515 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837523 | orchestrator | 2025-08-29 19:31:33.837531 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 19:31:33.837542 | orchestrator | Friday 29 August 2025 
19:30:50 +0000 (0:00:00.331) 0:05:36.950 ********* 2025-08-29 19:31:33.837550 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837558 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837565 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837573 | orchestrator | 2025-08-29 19:31:33.837581 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 19:31:33.837589 | orchestrator | Friday 29 August 2025 19:30:50 +0000 (0:00:00.311) 0:05:37.261 ********* 2025-08-29 19:31:33.837597 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837604 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837612 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837620 | orchestrator | 2025-08-29 19:31:33.837628 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 19:31:33.837636 | orchestrator | Friday 29 August 2025 19:30:51 +0000 (0:00:00.643) 0:05:37.905 ********* 2025-08-29 19:31:33.837644 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837652 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837659 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837667 | orchestrator | 2025-08-29 19:31:33.837675 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-08-29 19:31:33.837683 | orchestrator | Friday 29 August 2025 19:30:51 +0000 (0:00:00.322) 0:05:38.228 ********* 2025-08-29 19:31:33.837691 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837698 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837706 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837714 | orchestrator | 2025-08-29 19:31:33.837722 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-08-29 19:31:33.837730 | orchestrator | Friday 29 August 2025 
19:30:52 +0000 (0:00:00.333) 0:05:38.561 ********* 2025-08-29 19:31:33.837737 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.837745 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.837753 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.837761 | orchestrator | 2025-08-29 19:31:33.837768 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-08-29 19:31:33.837776 | orchestrator | Friday 29 August 2025 19:30:52 +0000 (0:00:00.830) 0:05:39.392 ********* 2025-08-29 19:31:33.837784 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.837792 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.837799 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.837807 | orchestrator | 2025-08-29 19:31:33.837815 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-08-29 19:31:33.837823 | orchestrator | Friday 29 August 2025 19:30:53 +0000 (0:00:00.714) 0:05:40.107 ********* 2025-08-29 19:31:33.837830 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.837838 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.837846 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.837854 | orchestrator | 2025-08-29 19:31:33.837862 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-08-29 19:31:33.837869 | orchestrator | Friday 29 August 2025 19:30:54 +0000 (0:00:00.354) 0:05:40.461 ********* 2025-08-29 19:31:33.837877 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.837885 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.837893 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.837905 | orchestrator | 2025-08-29 19:31:33.837913 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-08-29 19:31:33.837921 | orchestrator | Friday 29 August 2025 19:30:54 +0000 (0:00:00.906) 0:05:41.367 ********* 
2025-08-29 19:31:33.837939 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.837947 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.837955 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.837963 | orchestrator | 2025-08-29 19:31:33.837971 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-08-29 19:31:33.837979 | orchestrator | Friday 29 August 2025 19:30:56 +0000 (0:00:01.234) 0:05:42.602 ********* 2025-08-29 19:31:33.837986 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.837994 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.838006 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.838035 | orchestrator | 2025-08-29 19:31:33.838045 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-08-29 19:31:33.838053 | orchestrator | Friday 29 August 2025 19:30:57 +0000 (0:00:00.956) 0:05:43.558 ********* 2025-08-29 19:31:33.838061 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.838069 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.838077 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.838084 | orchestrator | 2025-08-29 19:31:33.838092 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-08-29 19:31:33.838100 | orchestrator | Friday 29 August 2025 19:31:05 +0000 (0:00:08.218) 0:05:51.776 ********* 2025-08-29 19:31:33.838108 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.838116 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.838124 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.838131 | orchestrator | 2025-08-29 19:31:33.838139 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-08-29 19:31:33.838147 | orchestrator | Friday 29 August 2025 19:31:06 +0000 (0:00:00.725) 0:05:52.502 ********* 2025-08-29 19:31:33.838155 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 19:31:33.838163 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.838171 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.838179 | orchestrator | 2025-08-29 19:31:33.838187 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-08-29 19:31:33.838195 | orchestrator | Friday 29 August 2025 19:31:14 +0000 (0:00:08.471) 0:06:00.974 ********* 2025-08-29 19:31:33.838202 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.838210 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.838218 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.838226 | orchestrator | 2025-08-29 19:31:33.838234 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-08-29 19:31:33.838241 | orchestrator | Friday 29 August 2025 19:31:18 +0000 (0:00:04.015) 0:06:04.989 ********* 2025-08-29 19:31:33.838249 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:31:33.838257 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:31:33.838265 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:31:33.838273 | orchestrator | 2025-08-29 19:31:33.838281 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-08-29 19:31:33.838292 | orchestrator | Friday 29 August 2025 19:31:28 +0000 (0:00:09.592) 0:06:14.582 ********* 2025-08-29 19:31:33.838300 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.838308 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.838316 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.838324 | orchestrator | 2025-08-29 19:31:33.838331 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-08-29 19:31:33.838339 | orchestrator | Friday 29 August 2025 19:31:28 +0000 (0:00:00.362) 0:06:14.944 ********* 2025-08-29 19:31:33.838347 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 19:31:33.838355 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.838363 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.838370 | orchestrator | 2025-08-29 19:31:33.838378 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-08-29 19:31:33.838391 | orchestrator | Friday 29 August 2025 19:31:28 +0000 (0:00:00.372) 0:06:15.317 ********* 2025-08-29 19:31:33.838399 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.838407 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.838414 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.838422 | orchestrator | 2025-08-29 19:31:33.838430 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-08-29 19:31:33.838438 | orchestrator | Friday 29 August 2025 19:31:29 +0000 (0:00:00.671) 0:06:15.989 ********* 2025-08-29 19:31:33.838445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.838453 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.838461 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.838468 | orchestrator | 2025-08-29 19:31:33.838476 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-08-29 19:31:33.838484 | orchestrator | Friday 29 August 2025 19:31:29 +0000 (0:00:00.345) 0:06:16.335 ********* 2025-08-29 19:31:33.838492 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:31:33.838500 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.838507 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.838515 | orchestrator | 2025-08-29 19:31:33.838523 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-08-29 19:31:33.838531 | orchestrator | Friday 29 August 2025 19:31:30 +0000 (0:00:00.365) 0:06:16.700 ********* 2025-08-29 19:31:33.838538 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 19:31:33.838546 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:31:33.838554 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:31:33.838562 | orchestrator | 2025-08-29 19:31:33.838569 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-08-29 19:31:33.838577 | orchestrator | Friday 29 August 2025 19:31:30 +0000 (0:00:00.400) 0:06:17.101 ********* 2025-08-29 19:31:33.838585 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.838593 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.838601 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.838608 | orchestrator | 2025-08-29 19:31:33.838616 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-08-29 19:31:33.838624 | orchestrator | Friday 29 August 2025 19:31:31 +0000 (0:00:01.288) 0:06:18.390 ********* 2025-08-29 19:31:33.838632 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:31:33.838640 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:31:33.838647 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:31:33.838655 | orchestrator | 2025-08-29 19:31:33.838663 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:31:33.838671 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 19:31:33.838679 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 19:31:33.838687 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 19:31:33.838695 | orchestrator | 2025-08-29 19:31:33.838703 | orchestrator | 2025-08-29 19:31:33.838714 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:31:33.838722 | orchestrator | Friday 29 August 2025 19:31:32 +0000 (0:00:00.801) 
0:06:19.191 ********* 2025-08-29 19:31:33.838730 | orchestrator | =============================================================================== 2025-08-29 19:31:33.838738 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.59s 2025-08-29 19:31:33.838746 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.47s 2025-08-29 19:31:33.838754 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.22s 2025-08-29 19:31:33.838761 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.20s 2025-08-29 19:31:33.838773 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.11s 2025-08-29 19:31:33.838781 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.45s 2025-08-29 19:31:33.838789 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 5.20s 2025-08-29 19:31:33.838797 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.85s 2025-08-29 19:31:33.838804 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.73s 2025-08-29 19:31:33.838812 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.70s 2025-08-29 19:31:33.838820 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.47s 2025-08-29 19:31:33.838828 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.40s 2025-08-29 19:31:33.838835 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.33s 2025-08-29 19:31:33.838843 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.23s 2025-08-29 19:31:33.838851 | orchestrator | haproxy-config : Copying over neutron haproxy config 
-------------------- 4.23s 2025-08-29 19:31:33.838862 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.12s 2025-08-29 19:31:33.838870 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.12s 2025-08-29 19:31:33.838877 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.02s 2025-08-29 19:31:33.838885 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.73s 2025-08-29 19:31:33.838893 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.71s 2025-08-29 19:31:33.838901 | orchestrator | 2025-08-29 19:31:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:31:36.857191 | orchestrator | 2025-08-29 19:31:36 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED 2025-08-29 19:31:36.860145 | orchestrator | 2025-08-29 19:31:36 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED 2025-08-29 19:31:36.861645 | orchestrator | 2025-08-29 19:31:36 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:31:36.861866 | orchestrator | 2025-08-29 19:31:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:31:40.006252 | orchestrator | 2025-08-29 19:31:39 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED 2025-08-29 19:31:40.006337 | orchestrator | 2025-08-29 19:31:39 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED 2025-08-29 19:31:40.006355 | orchestrator | 2025-08-29 19:31:39 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED 2025-08-29 19:31:40.006372 | orchestrator | 2025-08-29 19:31:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:31:43.036170 | orchestrator | 2025-08-29 19:31:43 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED 2025-08-29 19:31:43.036392 | orchestrator | 2025-08-29 19:31:43 
| INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED
2025-08-29 19:31:43.037082 | orchestrator | 2025-08-29 19:31:43 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state STARTED
2025-08-29 19:31:43.037240 | orchestrator | 2025-08-29 19:31:43 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:33:51.077915 | orchestrator | 2025-08-29 19:33:51 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED
2025-08-29 19:33:51.079938 | orchestrator | 2025-08-29 19:33:51 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in
state STARTED
2025-08-29 19:33:51.085929 | orchestrator | 2025-08-29 19:33:51 | INFO  | Task 2081fc47-e262-49ed-a529-6c3c1c791b31 is in state SUCCESS
2025-08-29 19:33:51.088272 | orchestrator |
2025-08-29 19:33:51.088314 | orchestrator |
2025-08-29 19:33:51.088327 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-08-29 19:33:51.088340 | orchestrator |
2025-08-29 19:33:51.088351 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-08-29 19:33:51.088364 | orchestrator | Friday 29 August 2025 19:22:21 +0000 (0:00:00.795) 0:00:00.795 *********
2025-08-29 19:33:51.088377 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.088429 | orchestrator |
2025-08-29 19:33:51.088443 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 19:33:51.088454 | orchestrator | Friday 29 August 2025 19:22:22 +0000 (0:00:01.268) 0:00:02.064 *********
2025-08-29 19:33:51.088465 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.088478 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.088489 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.088500 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.088511 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.088548 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.088560 | orchestrator |
2025-08-29 19:33:51.088571 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 19:33:51.088659 | orchestrator | Friday 29 August 2025 19:22:24 +0000 (0:00:01.654) 0:00:03.718 *********
2025-08-29 19:33:51.088671 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.088683 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.088693 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.088704 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.088715 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.088726 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.088736 | orchestrator |
2025-08-29 19:33:51.088918 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 19:33:51.088934 | orchestrator | Friday 29 August 2025 19:22:25 +0000 (0:00:00.805) 0:00:04.524 *********
2025-08-29 19:33:51.088947 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.088959 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.088971 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.088983 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.088998 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.089018 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.089037 | orchestrator |
2025-08-29 19:33:51.089081 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 19:33:51.089100 | orchestrator | Friday 29 August 2025 19:22:26 +0000 (0:00:00.771) 0:00:05.295 *********
2025-08-29 19:33:51.089118 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.089136 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.089266 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.089288 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.089368 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.089381 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.089392 | orchestrator |
2025-08-29 19:33:51.089403 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 19:33:51.089414 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.870) 0:00:06.166 *********
2025-08-29 19:33:51.089425 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.089486 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.089497 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.089508 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.089519 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.089529 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.089540 | orchestrator |
2025-08-29 19:33:51.089551 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 19:33:51.089563 | orchestrator | Friday 29 August 2025 19:22:27 +0000 (0:00:00.712) 0:00:06.879 *********
2025-08-29 19:33:51.089598 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.089609 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.089620 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.089631 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.089641 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.089652 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.089662 | orchestrator |
2025-08-29 19:33:51.089823 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 19:33:51.089837 | orchestrator | Friday 29 August 2025 19:22:29 +0000 (0:00:01.611) 0:00:08.490 *********
2025-08-29 19:33:51.089848 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.089860 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.089871 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.089882 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.089899 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.089918 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.089973 | orchestrator |
2025-08-29 19:33:51.089992 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 19:33:51.090004 | orchestrator | Friday 29 August 2025 19:22:30 +0000 (0:00:01.125) 0:00:09.616 *********
2025-08-29 19:33:51.090117 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.090175 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.090187 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.090198 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.090209 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.090219 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.090230 | orchestrator |
2025-08-29 19:33:51.090241 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 19:33:51.090252 | orchestrator | Friday 29 August 2025 19:22:31 +0000 (0:00:00.766) 0:00:10.383 *********
2025-08-29 19:33:51.090362 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 19:33:51.090374 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:33:51.090401 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 19:33:51.090412 | orchestrator |
2025-08-29 19:33:51.090423 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 19:33:51.090434 | orchestrator | Friday 29 August 2025 19:22:31 +0000 (0:00:00.694) 0:00:11.077 *********
2025-08-29 19:33:51.090468 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.090487 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.090504 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.090523 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.090542 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.090558 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.090577 | orchestrator |
2025-08-29 19:33:51.090618 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 19:33:51.090638 | orchestrator | Friday 29 August 2025 19:22:32 +0000 (0:00:00.997) 0:00:12.075 *********
2025-08-29 19:33:51.090658 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 19:33:51.090672 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:33:51.090683 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 19:33:51.090693 | orchestrator |
2025-08-29 19:33:51.090704 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 19:33:51.090715 | orchestrator | Friday 29 August 2025 19:22:36 +0000 (0:00:03.835) 0:00:15.910 *********
2025-08-29 19:33:51.090726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 19:33:51.090738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 19:33:51.090749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 19:33:51.090760 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.090853 | orchestrator |
2025-08-29 19:33:51.090868 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 19:33:51.090880 | orchestrator | Friday 29 August 2025 19:22:37 +0000 (0:00:00.867) 0:00:16.777 *********
2025-08-29 19:33:51.090894 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.090946 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.090960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.090972 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.090983 | orchestrator |
2025-08-29 19:33:51.090994 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 19:33:51.091043 | orchestrator | Friday 29 August 2025 19:22:38 +0000 (0:00:00.674) 0:00:17.451 *********
2025-08-29 19:33:51.091071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.091170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.091232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.091244 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.091255 | orchestrator |
2025-08-29 19:33:51.091266 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 19:33:51.091278 | orchestrator | Friday 29 August 2025 19:22:38 +0000 (0:00:00.164) 0:00:17.616 *********
2025-08-29 19:33:51.091309 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 19:22:34.115029', 'end': '2025-08-29 19:22:34.374391', 'delta': '0:00:00.259362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.091324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 19:22:35.338371', 'end': '2025-08-29 19:22:35.625696', 'delta': '0:00:00.287325', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.091336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 19:22:36.338682', 'end': '2025-08-29 19:22:36.597010', 'delta': '0:00:00.258328', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.091348 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.091359 | orchestrator |
2025-08-29 19:33:51.091377 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 19:33:51.091389 | orchestrator | Friday 29 August 2025 19:22:38 +0000 (0:00:00.324) 0:00:17.940 *********
2025-08-29 19:33:51.091399 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.091410 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.091421 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.091432 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.091443 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.091453 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.091464 | orchestrator |
2025-08-29 19:33:51.091475 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 19:33:51.091486 | orchestrator | Friday 29 August 2025 19:22:40 +0000 (0:00:02.015) 0:00:19.956 *********
2025-08-29 19:33:51.091496 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 19:33:51.091507 | orchestrator |
2025-08-29 19:33:51.091518 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 19:33:51.091529 | orchestrator | Friday 29 August 2025 19:22:41 +0000 (0:00:00.786) 0:00:20.743 *********
2025-08-29 19:33:51.091540 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.091551 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.091562 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.091573 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.091583 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.091594 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.091605 | orchestrator |
2025-08-29 19:33:51.091616 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 19:33:51.091626 | orchestrator | Friday 29 August 2025 19:22:43 +0000 (0:00:01.285) 0:00:22.528 *********
2025-08-29 19:33:51.091637 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.091648 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.091659 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.091670 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.091680 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.091691 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.091702 | orchestrator |
2025-08-29 19:33:51.091712 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 19:33:51.091723 | orchestrator | Friday 29 August 2025 19:22:44 +0000 (0:00:01.285) 0:00:23.813 *********
2025-08-29 19:33:51.091734 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.091745 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.091756 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.091767 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.091796 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.091807 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.091818 | orchestrator |
2025-08-29 19:33:51.091829 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 19:33:51.091840 | orchestrator | Friday
29 August 2025 19:22:45 +0000 (0:00:00.810) 0:00:24.624 ********* 2025-08-29 19:33:51.091851 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.091862 | orchestrator | 2025-08-29 19:33:51.091873 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 19:33:51.091884 | orchestrator | Friday 29 August 2025 19:22:45 +0000 (0:00:00.224) 0:00:24.849 ********* 2025-08-29 19:33:51.091895 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.091906 | orchestrator | 2025-08-29 19:33:51.091922 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 19:33:51.091933 | orchestrator | Friday 29 August 2025 19:22:45 +0000 (0:00:00.262) 0:00:25.111 ********* 2025-08-29 19:33:51.091944 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.091972 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.091983 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.091994 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092006 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.092023 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092034 | orchestrator | 2025-08-29 19:33:51.092063 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 19:33:51.092075 | orchestrator | Friday 29 August 2025 19:22:46 +0000 (0:00:00.582) 0:00:25.694 ********* 2025-08-29 19:33:51.092086 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.092097 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.092108 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.092118 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092129 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.092140 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092150 | orchestrator | 2025-08-29 19:33:51.092161 | 
orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 19:33:51.092172 | orchestrator | Friday 29 August 2025 19:22:47 +0000 (0:00:00.755) 0:00:26.449 ********* 2025-08-29 19:33:51.092183 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.092194 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.092205 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.092215 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092226 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.092236 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092247 | orchestrator | 2025-08-29 19:33:51.092258 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 19:33:51.092269 | orchestrator | Friday 29 August 2025 19:22:48 +0000 (0:00:00.703) 0:00:27.152 ********* 2025-08-29 19:33:51.092280 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.092291 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.092302 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.092312 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092323 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.092334 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092345 | orchestrator | 2025-08-29 19:33:51.092356 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 19:33:51.092366 | orchestrator | Friday 29 August 2025 19:22:48 +0000 (0:00:00.897) 0:00:28.050 ********* 2025-08-29 19:33:51.092377 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.092388 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.092399 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.092410 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092421 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 19:33:51.092431 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092442 | orchestrator | 2025-08-29 19:33:51.092453 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 19:33:51.092464 | orchestrator | Friday 29 August 2025 19:22:49 +0000 (0:00:00.717) 0:00:28.768 ********* 2025-08-29 19:33:51.092474 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.092485 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.092496 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.092507 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092517 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.092528 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092539 | orchestrator | 2025-08-29 19:33:51.092550 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 19:33:51.092561 | orchestrator | Friday 29 August 2025 19:22:50 +0000 (0:00:00.679) 0:00:29.447 ********* 2025-08-29 19:33:51.092572 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.092583 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.092594 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.092604 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.092615 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.092626 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.092637 | orchestrator | 2025-08-29 19:33:51.092647 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 19:33:51.092665 | orchestrator | Friday 29 August 2025 19:22:50 +0000 (0:00:00.682) 0:00:30.129 ********* 2025-08-29 19:33:51.092678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--159b9ed4--8d08--5970--86a8--bd63a32380d6-osd--block--159b9ed4--8d08--5970--86a8--bd63a32380d6', 'dm-uuid-LVM-t4EDXhx402ZcE3z2KFlslw8sRuG7oKTbYGF2vURi18WcU2XTQv4lDqBSw9WnxGlH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--338f76e1--8833--5be4--9943--9980bb5050e8-osd--block--338f76e1--8833--5be4--9943--9980bb5050e8', 'dm-uuid-LVM-iKHmePWtKLB5mUYv1rfhhXzkUTyAb52paGjQE2Orfi1AoP63rLVzgZAC6PWtzkkW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f946ce78--a8de--59ba--8bf5--045c292b6708-osd--block--f946ce78--a8de--59ba--8bf5--045c292b6708', 'dm-uuid-LVM-K5OisVE7MwbmJZfp6cO3yPv8VG5rk33hfjM3DCpRmgHNUXElf2VldbuyuNjKsvvv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d878572--29ec--5c6d--9e5c--f341c26bb0e1-osd--block--9d878572--29ec--5c6d--9e5c--f341c26bb0e1', 'dm-uuid-LVM-4DTR1TLZAfcyRf3R1a2hjz4yMdW41t7ej2slpNLAsLSSY1atWK0gONQetfswSQFR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092956 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d29334ae--dac4--5c8b--9540--76ee60da5ca1-osd--block--d29334ae--dac4--5c8b--9540--76ee60da5ca1', 'dm-uuid-LVM-M7Pznd4vqBN3cdcw7Ka3CMD3cUWktfFuNeBE1p6IEPFdWlZwUMJkYq5Ucj5sGb8T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.092986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--916dc454--8beb--55d0--b00a--22c96f7025a6-osd--block--916dc454--8beb--55d0--b00a--22c96f7025a6', 'dm-uuid-LVM-zXNl7P21uuZCQHc5oyNdERO4Q6IdPHUAph5oeYzpjdh5dsj1D2Cg3wgPNmI1KrtQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093067 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093129 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f946ce78--a8de--59ba--8bf5--045c292b6708-osd--block--f946ce78--a8de--59ba--8bf5--045c292b6708'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LLHbGs-EyvY-Y1o1-DvDv-Qp0y-rP5z-cuRGsu', 'scsi-0QEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6', 'scsi-SQEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093230 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part1', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part14', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part15', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part16', 
'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9d878572--29ec--5c6d--9e5c--f341c26bb0e1-osd--block--9d878572--29ec--5c6d--9e5c--f341c26bb0e1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9KieA2-8dIZ-S4XF-J4Dk-bz8s-vZ0D-4QydRe', 'scsi-0QEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32', 'scsi-SQEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--d29334ae--dac4--5c8b--9540--76ee60da5ca1-osd--block--d29334ae--dac4--5c8b--9540--76ee60da5ca1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sBwl3V-PCyv-qHlY-COea-GaUo-WyS0-3jDzp6', 'scsi-0QEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c', 'scsi-SQEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--916dc454--8beb--55d0--b00a--22c96f7025a6-osd--block--916dc454--8beb--55d0--b00a--22c96f7025a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EHNyYv-2uKH-imfw-3hdf-kdGr-eLBb-oNVihd', 'scsi-0QEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80', 'scsi-SQEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03', 'scsi-SQEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d', 'scsi-SQEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093486 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part1', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part14', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part15', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part16', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093551 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093561 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.093572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--159b9ed4--8d08--5970--86a8--bd63a32380d6-osd--block--159b9ed4--8d08--5970--86a8--bd63a32380d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OlYmTf-Djfa-mdV8-A0hp-DTyx-3eeP-HiTeFQ', 'scsi-0QEMU_QEMU_HARDDISK_0a4c4485-d2ea-4599-9435-e606068873fe', 'scsi-SQEMU_QEMU_HARDDISK_0a4c4485-d2ea-4599-9435-e606068873fe'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093582 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.093592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093602 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.093623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'sdc', 'value': {'holders': ['ceph--338f76e1--8833--5be4--9943--9980bb5050e8-osd--block--338f76e1--8833--5be4--9943--9980bb5050e8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k9vR87-3oXQ-j2rI-QoQR-3p4H-kDuO-MKPLVR', 'scsi-0QEMU_QEMU_HARDDISK_21104e56-4cdf-49d9-91fd-13aff314e467', 'scsi-SQEMU_QEMU_HARDDISK_21104e56-4cdf-49d9-91fd-13aff314e467'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5bba166-d17c-451d-864c-9f74c60a90a3', 'scsi-SQEMU_QEMU_HARDDISK_e5bba166-d17c-451d-864c-9f74c60a90a3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093672 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.093683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093755 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093798 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.093808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 19:33:51.093833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:33:51.093897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:33:51.093930 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.093940 | orchestrator | 2025-08-29 19:33:51.093950 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 19:33:51.093960 | orchestrator | Friday 29 August 2025 19:22:53 +0000 (0:00:02.274) 0:00:32.404 ********* 2025-08-29 19:33:51.093971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--159b9ed4--8d08--5970--86a8--bd63a32380d6-osd--block--159b9ed4--8d08--5970--86a8--bd63a32380d6', 'dm-uuid-LVM-t4EDXhx402ZcE3z2KFlslw8sRuG7oKTbYGF2vURi18WcU2XTQv4lDqBSw9WnxGlH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:33:51.094107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--338f76e1--8833--5be4--9943--9980bb5050e8-osd--block--338f76e1--8833--5be4--9943--9980bb5050e8', 'dm-uuid-LVM-iKHmePWtKLB5mUYv1rfhhXzkUTyAb52paGjQE2Orfi1AoP63rLVzgZAC6PWtzkkW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:33:51.094135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:33:51.094146 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
2025-08-29 19:33:51.094161 | orchestrator | skipping: [testbed-node-3] => (items: dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2025-08-29 19:33:51.094199 | orchestrator | skipping: [testbed-node-4] => (items: dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0; same skip_reason and false_condition)
2025-08-29 19:33:51.094505 | orchestrator | skipping: [testbed-node-5] => (items: dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0; same skip_reason and false_condition)
2025-08-29 19:33:51.094651 | orchestrator | skipping: [testbed-node-0] => (items: loop0..loop7, sda; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-08-29 19:33:51.094582 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.094899 | orchestrator | skipping: [testbed-node-4]
[Repeated per-item device-fact dumps elided for brevity. On each node they describe the same layout: an 80.00 GB QEMU HARDDISK sda with partitions sda1 (cloudimg-rootfs, 79.00 GB), sda14 (4.00 MB), sda15 (UEFI, 106.00 MB) and sda16 (BOOT, 913.00 MB); two 20.00 GB QEMU HARDDISKs sdb and sdc holding ceph LVM volumes mapped to dm-0 and dm-1; an unused 20.00 GB sdd; a config-2 QEMU DVD-ROM sr0; and empty loop0..loop7 devices.]
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b3dc65af-678c-4ad0-95f2-4a490e1a0b3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 19:33:51.094937 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.094948 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095002 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095013 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095023 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095039 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.095049 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095064 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095074 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.095091 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095102 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095113 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part1', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part14', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part15', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part16', 'scsi-SQEMU_QEMU_HARDDISK_4c3606b3-c531-44c5-857d-1ee4d13c4585-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095177 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095194 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095204 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.095214 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095224 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095234 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095250 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095260 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095282 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095292 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095303 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5dc2865-d12f-434a-a66f-3507e82ce759-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095326 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 19:33:51.095337 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.095347 | orchestrator |
2025-08-29 19:33:51.095357 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-08-29 19:33:51.095367 | orchestrator | Friday 29 August 2025 19:22:54 +0000 (0:00:01.549) 0:00:33.954 *********
2025-08-29 19:33:51.095382 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.095393 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.095403 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.095412 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.095422 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.095432 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.095441 | orchestrator |
2025-08-29 19:33:51.095451 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-08-29 19:33:51.095461 | orchestrator | Friday 29 August 2025 19:22:55 +0000 (0:00:01.110) 0:00:35.065 *********
2025-08-29 19:33:51.095470 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.095480 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.095489 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.095499 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.095508 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.095518 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.095527 | orchestrator |
2025-08-29 19:33:51.095537 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 19:33:51.095547 | orchestrator | Friday 29 August 2025 19:22:56 +0000 (0:00:00.713) 0:00:35.779 *********
2025-08-29 19:33:51.095556 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.095566 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.095576 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.095585 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.095595 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.095610 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.095620 | orchestrator |
2025-08-29 19:33:51.095630 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 19:33:51.095639 | orchestrator | Friday 29 August 2025 19:22:57 +0000 (0:00:00.866) 0:00:36.646 *********
2025-08-29 19:33:51.095649 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.095659 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.095668 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.095677 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.095687 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.095697 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.095706 | orchestrator |
2025-08-29 19:33:51.095716 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 19:33:51.095725 | orchestrator | Friday 29 August 2025 19:22:58 +0000 (0:00:00.536) 0:00:37.182 *********
2025-08-29 19:33:51.095735 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.095745 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.095754 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.095764 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.095789 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.095799 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.095809 | orchestrator |
2025-08-29 19:33:51.095819 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 19:33:51.095828 | orchestrator | Friday 29 August 2025 19:22:58 +0000 (0:00:00.662) 0:00:37.844 *********
2025-08-29 19:33:51.095838 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.095848 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.095857 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.095867 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.095877 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.095886 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.095896 | orchestrator |
2025-08-29 19:33:51.095905 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-08-29 19:33:51.095915 | orchestrator | Friday 29 August 2025 19:22:59 +0000 (0:00:01.190) 0:00:39.035 *********
2025-08-29 19:33:51.095925 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 19:33:51.095935 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 19:33:51.095945 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 19:33:51.095955 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 19:33:51.095965 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 19:33:51.095974 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 19:33:51.095984 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 19:33:51.095993 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 19:33:51.096003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 19:33:51.096012 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 19:33:51.096022 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 19:33:51.096031 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 19:33:51.096041 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 19:33:51.096051 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 19:33:51.096060 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 19:33:51.096070 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 19:33:51.096079 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 19:33:51.096089 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 19:33:51.096098 | orchestrator |
2025-08-29 19:33:51.096108 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-08-29 19:33:51.096118 | orchestrator | Friday 29 August 2025 19:23:03 +0000 (0:00:03.895) 0:00:42.930 *********
2025-08-29 19:33:51.096128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 19:33:51.096144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 19:33:51.096154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 19:33:51.096163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 19:33:51.096177 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 19:33:51.096186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 19:33:51.096196 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096205 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.096215 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 19:33:51.096225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 19:33:51.096240 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 19:33:51.096250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 19:33:51.096260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 19:33:51.096269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 19:33:51.096279 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.096288 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 19:33:51.096298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 19:33:51.096307 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 19:33:51.096317 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.096327 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.096336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 19:33:51.096346 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 19:33:51.096356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 19:33:51.096365 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.096375 | orchestrator |
2025-08-29 19:33:51.096385 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-08-29 19:33:51.096394 | orchestrator | Friday 29 August 2025 19:23:04 +0000 (0:00:00.838) 0:00:43.769 *********
2025-08-29 19:33:51.096404 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.096414 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.096423 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.096433 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:33:51.096443 | orchestrator |
2025-08-29 19:33:51.096453 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 19:33:51.096463 | orchestrator | Friday 29 August 2025 19:23:05 +0000 (0:00:01.243) 0:00:45.013 *********
2025-08-29 19:33:51.096473 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096483 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.096492 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.096502 | orchestrator |
2025-08-29 19:33:51.096512 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 19:33:51.096521 | orchestrator | Friday 29 August 2025 19:23:06 +0000 (0:00:00.506) 0:00:45.519 *********
2025-08-29 19:33:51.096531 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096541 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.096550 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.096560 | orchestrator |
2025-08-29 19:33:51.096570 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 19:33:51.096579 | orchestrator | Friday 29 August 2025 19:23:06 +0000 (0:00:00.391) 0:00:45.911 *********
2025-08-29 19:33:51.096589 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096599 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.096608 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.096618 | orchestrator |
2025-08-29 19:33:51.096633 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 19:33:51.096643 | orchestrator | Friday 29 August 2025 19:23:07 +0000 (0:00:00.717) 0:00:46.628 *********
2025-08-29 19:33:51.096653 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.096662 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.096672 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.096681 | orchestrator |
2025-08-29 19:33:51.096691 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 19:33:51.096700 | orchestrator | Friday 29 August 2025 19:23:08 +0000 (0:00:00.726) 0:00:47.355 *********
2025-08-29 19:33:51.096710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:33:51.096720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:33:51.096730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:33:51.096739 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096749 | orchestrator |
2025-08-29 19:33:51.096759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 19:33:51.096820 | orchestrator | Friday 29 August 2025 19:23:08 +0000 (0:00:00.453) 0:00:47.808 *********
2025-08-29 19:33:51.096832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:33:51.096842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:33:51.096851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:33:51.096861 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096870 | orchestrator |
2025-08-29 19:33:51.096880 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 19:33:51.096890 | orchestrator | Friday 29 August 2025 19:23:09 +0000 (0:00:00.590) 0:00:48.399 *********
2025-08-29 19:33:51.096900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:33:51.096909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:33:51.096919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:33:51.096928 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.096938 | orchestrator |
2025-08-29 19:33:51.096948 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 19:33:51.096957 | orchestrator | Friday 29 August 2025 19:23:09 +0000 (0:00:00.687) 0:00:49.086 *********
2025-08-29 19:33:51.096967 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.096977 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.096991 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.096999 | orchestrator |
2025-08-29 19:33:51.097007 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 19:33:51.097015 | orchestrator | Friday 29 August 2025 19:23:10 +0000 (0:00:00.438) 0:00:49.525 *********
2025-08-29 19:33:51.097023 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 19:33:51.097031 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2025-08-29 19:33:51.097039 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 19:33:51.097047 | orchestrator | 2025-08-29 19:33:51.097060 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 19:33:51.097068 | orchestrator | Friday 29 August 2025 19:23:11 +0000 (0:00:01.187) 0:00:50.713 ********* 2025-08-29 19:33:51.097076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 19:33:51.097084 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 19:33:51.097092 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 19:33:51.097100 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 19:33:51.097108 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 19:33:51.097116 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 19:33:51.097124 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 19:33:51.097138 | orchestrator | 2025-08-29 19:33:51.097145 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 19:33:51.097153 | orchestrator | Friday 29 August 2025 19:23:12 +0000 (0:00:01.073) 0:00:51.786 ********* 2025-08-29 19:33:51.097161 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 19:33:51.097169 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 19:33:51.097177 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 19:33:51.097185 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 19:33:51.097193 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 19:33:51.097201 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 19:33:51.097209 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 19:33:51.097217 | orchestrator | 2025-08-29 19:33:51.097225 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 19:33:51.097232 | orchestrator | Friday 29 August 2025 19:23:15 +0000 (0:00:02.368) 0:00:54.155 ********* 2025-08-29 19:33:51.097240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.097249 | orchestrator | 2025-08-29 19:33:51.097257 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 19:33:51.097264 | orchestrator | Friday 29 August 2025 19:23:16 +0000 (0:00:01.627) 0:00:55.783 ********* 2025-08-29 19:33:51.097273 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.097281 | orchestrator | 2025-08-29 19:33:51.097288 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 19:33:51.097296 | orchestrator | Friday 29 August 2025 19:23:18 +0000 (0:00:01.583) 0:00:57.366 ********* 2025-08-29 19:33:51.097304 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.097312 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.097320 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.097328 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.097336 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.097344 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.097351 | orchestrator | 2025-08-29 19:33:51.097359 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 19:33:51.097367 | orchestrator | Friday 29 August 2025 19:23:19 +0000 (0:00:01.371) 0:00:58.738 ********* 2025-08-29 19:33:51.097375 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.097383 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.097391 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.097399 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.097407 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.097414 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.097422 | orchestrator | 2025-08-29 19:33:51.097430 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 19:33:51.097438 | orchestrator | Friday 29 August 2025 19:23:20 +0000 (0:00:01.213) 0:00:59.951 ********* 2025-08-29 19:33:51.097446 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.097454 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.097462 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.097470 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.097478 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.097486 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.097493 | orchestrator | 2025-08-29 19:33:51.097501 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 19:33:51.097509 | orchestrator | Friday 29 August 2025 19:23:22 +0000 (0:00:01.573) 0:01:01.525 ********* 2025-08-29 19:33:51.097522 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.097530 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.097538 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.097545 | orchestrator | ok: [testbed-node-4] 2025-08-29 
19:33:51.097553 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.097561 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.097569 | orchestrator | 2025-08-29 19:33:51.097577 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 19:33:51.097589 | orchestrator | Friday 29 August 2025 19:23:23 +0000 (0:00:00.958) 0:01:02.484 ********* 2025-08-29 19:33:51.097597 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.097605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.097613 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.097621 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.097629 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.097637 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.097644 | orchestrator | 2025-08-29 19:33:51.097652 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 19:33:51.097665 | orchestrator | Friday 29 August 2025 19:23:25 +0000 (0:00:01.825) 0:01:04.309 ********* 2025-08-29 19:33:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:33:51.097681 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.097689 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.097697 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.097705 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.097713 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.097721 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.097729 | orchestrator | 2025-08-29 19:33:51.097736 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 19:33:51.097744 | orchestrator | Friday 29 August 2025 19:23:26 +0000 (0:00:00.844) 0:01:05.154 ********* 2025-08-29 19:33:51.097752 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 19:33:51.097760 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.097780 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.097788 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.097796 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.097804 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.097812 | orchestrator | 2025-08-29 19:33:51.097820 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 19:33:51.097828 | orchestrator | Friday 29 August 2025 19:23:26 +0000 (0:00:00.839) 0:01:05.993 ********* 2025-08-29 19:33:51.097836 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.097844 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.097852 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.097860 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.097867 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.097875 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.097883 | orchestrator | 2025-08-29 19:33:51.097891 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 19:33:51.097899 | orchestrator | Friday 29 August 2025 19:23:28 +0000 (0:00:01.729) 0:01:07.723 ********* 2025-08-29 19:33:51.097907 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.097915 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.097923 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.097930 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.097938 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.097946 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.097954 | orchestrator | 2025-08-29 19:33:51.097962 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 19:33:51.097970 | orchestrator | Friday 29 August 2025 19:23:30 +0000 (0:00:01.849) 0:01:09.573 ********* 
2025-08-29 19:33:51.097978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.097985 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.097999 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.098007 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.098048 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.098059 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.098067 | orchestrator | 2025-08-29 19:33:51.098075 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 19:33:51.098083 | orchestrator | Friday 29 August 2025 19:23:32 +0000 (0:00:01.567) 0:01:11.141 ********* 2025-08-29 19:33:51.098091 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.098099 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.098107 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.098115 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.098123 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.098130 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.098138 | orchestrator | 2025-08-29 19:33:51.098146 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 19:33:51.098154 | orchestrator | Friday 29 August 2025 19:23:33 +0000 (0:00:01.424) 0:01:12.565 ********* 2025-08-29 19:33:51.098162 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.098170 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.098178 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.098186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.098193 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.098201 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.098210 | orchestrator | 2025-08-29 19:33:51.098224 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 
19:33:51.098237 | orchestrator | Friday 29 August 2025 19:23:34 +0000 (0:00:01.261) 0:01:13.826 ********* 2025-08-29 19:33:51.098251 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.098264 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.098276 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.098289 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.098302 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.098313 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.098327 | orchestrator | 2025-08-29 19:33:51.098340 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 19:33:51.098354 | orchestrator | Friday 29 August 2025 19:23:35 +0000 (0:00:00.841) 0:01:14.668 ********* 2025-08-29 19:33:51.098367 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.098380 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.098393 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.098407 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.098418 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.098426 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.098434 | orchestrator | 2025-08-29 19:33:51.098442 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 19:33:51.098450 | orchestrator | Friday 29 August 2025 19:23:36 +0000 (0:00:01.046) 0:01:15.714 ********* 2025-08-29 19:33:51.098458 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.098466 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.098473 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.098481 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.098495 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.098502 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.098510 | orchestrator | 2025-08-29 19:33:51.098518 | 
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 19:33:51.098526 | orchestrator | Friday 29 August 2025 19:23:37 +0000 (0:00:00.938) 0:01:16.653 ********* 2025-08-29 19:33:51.098534 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.098541 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.098549 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.098557 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.098578 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.098593 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.098601 | orchestrator | 2025-08-29 19:33:51.098609 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 19:33:51.098617 | orchestrator | Friday 29 August 2025 19:23:38 +0000 (0:00:01.319) 0:01:17.973 ********* 2025-08-29 19:33:51.098625 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.098633 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.098641 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.098649 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.098657 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.098664 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.098672 | orchestrator | 2025-08-29 19:33:51.098680 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 19:33:51.098688 | orchestrator | Friday 29 August 2025 19:23:39 +0000 (0:00:00.702) 0:01:18.676 ********* 2025-08-29 19:33:51.098696 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.098704 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.098712 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.098720 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.098728 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.098735 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 19:33:51.098743 | orchestrator | 2025-08-29 19:33:51.098751 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 19:33:51.098759 | orchestrator | Friday 29 August 2025 19:23:40 +0000 (0:00:01.289) 0:01:19.966 ********* 2025-08-29 19:33:51.098767 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.098800 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.098808 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.098816 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.098824 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.098831 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.098839 | orchestrator | 2025-08-29 19:33:51.098847 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-08-29 19:33:51.098855 | orchestrator | Friday 29 August 2025 19:23:41 +0000 (0:00:01.053) 0:01:21.020 ********* 2025-08-29 19:33:51.098862 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.098870 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.098878 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.098886 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.098893 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.098901 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.098909 | orchestrator | 2025-08-29 19:33:51.098917 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-08-29 19:33:51.098925 | orchestrator | Friday 29 August 2025 19:23:43 +0000 (0:00:01.347) 0:01:22.367 ********* 2025-08-29 19:33:51.098932 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.098940 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.098948 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.098956 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.098963 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.098971 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.098979 | orchestrator | 2025-08-29 19:33:51.098987 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-08-29 19:33:51.098995 | orchestrator | Friday 29 August 2025 19:23:45 +0000 (0:00:02.281) 0:01:24.649 ********* 2025-08-29 19:33:51.099003 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.099011 | orchestrator | 2025-08-29 19:33:51.099019 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-08-29 19:33:51.099027 | orchestrator | Friday 29 August 2025 19:23:46 +0000 (0:00:01.031) 0:01:25.680 ********* 2025-08-29 19:33:51.099035 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.099042 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.099050 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.099063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.099071 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.099079 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.099086 | orchestrator | 2025-08-29 19:33:51.099094 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-08-29 19:33:51.099102 | orchestrator | Friday 29 August 2025 19:23:47 +0000 (0:00:00.632) 0:01:26.313 ********* 2025-08-29 19:33:51.099110 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.099117 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.099125 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.099133 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.099140 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.099148 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.099156 | orchestrator | 2025-08-29 19:33:51.099164 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-08-29 19:33:51.099171 | orchestrator | Friday 29 August 2025 19:23:47 +0000 (0:00:00.755) 0:01:27.069 ********* 2025-08-29 19:33:51.099179 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 19:33:51.099187 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 19:33:51.099195 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 19:33:51.099203 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 19:33:51.099210 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 19:33:51.099218 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 19:33:51.099231 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 19:33:51.099246 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 19:33:51.099259 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 19:33:51.099273 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 19:33:51.099294 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 19:33:51.099308 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 19:33:51.099322 | orchestrator | 2025-08-29 19:33:51.099336 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-08-29 19:33:51.099350 | orchestrator | Friday 
29 August 2025 19:23:49 +0000 (0:00:01.225) 0:01:28.294 ********* 2025-08-29 19:33:51.099360 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.099368 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.099381 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.099394 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.099408 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.099421 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.099433 | orchestrator | 2025-08-29 19:33:51.099446 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-08-29 19:33:51.099458 | orchestrator | Friday 29 August 2025 19:23:50 +0000 (0:00:01.019) 0:01:29.313 ********* 2025-08-29 19:33:51.099472 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.099486 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.099500 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.099513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.099527 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.099536 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.099544 | orchestrator | 2025-08-29 19:33:51.099554 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-08-29 19:33:51.099568 | orchestrator | Friday 29 August 2025 19:23:50 +0000 (0:00:00.519) 0:01:29.833 ********* 2025-08-29 19:33:51.099596 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.099610 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.099624 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.099637 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.099651 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.099664 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.099678 | orchestrator | 2025-08-29 19:33:51.099687 | orchestrator | 
TASK [ceph-container-common : Include registry.yml] **************************** 2025-08-29 19:33:51.099695 | orchestrator | Friday 29 August 2025 19:23:51 +0000 (0:00:00.640) 0:01:30.474 ********* 2025-08-29 19:33:51.099703 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.099711 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.099719 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.099726 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.099734 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.099742 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.099750 | orchestrator | 2025-08-29 19:33:51.099757 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-08-29 19:33:51.099765 | orchestrator | Friday 29 August 2025 19:23:51 +0000 (0:00:00.542) 0:01:31.017 ********* 2025-08-29 19:33:51.099822 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.099831 | orchestrator | 2025-08-29 19:33:51.099839 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-08-29 19:33:51.099847 | orchestrator | Friday 29 August 2025 19:23:52 +0000 (0:00:01.012) 0:01:32.029 ********* 2025-08-29 19:33:51.099855 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.099863 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.099870 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.099878 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.099886 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.099894 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.099902 | orchestrator | 2025-08-29 19:33:51.099910 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-08-29 
19:33:51.099918 | orchestrator | Friday 29 August 2025 19:25:09 +0000 (0:01:16.797) 0:02:48.826 ********* 2025-08-29 19:33:51.099926 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 19:33:51.099934 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 19:33:51.099942 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 19:33:51.099949 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.099957 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 19:33:51.099964 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 19:33:51.099970 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 19:33:51.099977 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.099984 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 19:33:51.099990 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 19:33:51.099997 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 19:33:51.100004 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.100011 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 19:33:51.100017 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 19:33:51.100024 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 19:33:51.100035 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.100042 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 19:33:51.100056 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 19:33:51.100062 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 19:33:51.100069 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.100082 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 19:33:51.100088 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 19:33:51.100095 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 19:33:51.100102 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.100108 | orchestrator | 2025-08-29 19:33:51.100115 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-08-29 19:33:51.100121 | orchestrator | Friday 29 August 2025 19:25:10 +0000 (0:00:00.771) 0:02:49.598 ********* 2025-08-29 19:33:51.100128 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.100134 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.100141 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.100148 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.100155 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.100161 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.100168 | orchestrator | 2025-08-29 19:33:51.100174 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-08-29 19:33:51.100181 | orchestrator | Friday 29 August 2025 19:25:11 +0000 (0:00:00.780) 0:02:50.378 ********* 2025-08-29 19:33:51.100187 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.100194 | orchestrator | 2025-08-29 19:33:51.100201 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-08-29 19:33:51.100207 | orchestrator | Friday 29 August 2025 19:25:11 +0000 (0:00:00.126) 0:02:50.505 ********* 
2025-08-29 19:33:51.100214 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100220 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100227 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100234 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100240 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100247 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100253 | orchestrator |
2025-08-29 19:33:51.100260 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-08-29 19:33:51.100267 | orchestrator | Friday 29 August 2025 19:25:11 +0000 (0:00:00.587) 0:02:51.092 *********
2025-08-29 19:33:51.100273 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100280 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100286 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100293 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100299 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100306 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100312 | orchestrator |
2025-08-29 19:33:51.100319 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-08-29 19:33:51.100325 | orchestrator | Friday 29 August 2025 19:25:12 +0000 (0:00:00.807) 0:02:51.900 *********
2025-08-29 19:33:51.100332 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100338 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100345 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100351 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100358 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100365 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100371 | orchestrator |
2025-08-29 19:33:51.100378 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-08-29 19:33:51.100384 | orchestrator | Friday 29 August 2025 19:25:13 +0000 (0:00:00.686) 0:02:52.586 *********
2025-08-29 19:33:51.100391 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.100398 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.100404 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.100411 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.100422 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.100429 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.100436 | orchestrator |
2025-08-29 19:33:51.100442 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-08-29 19:33:51.100449 | orchestrator | Friday 29 August 2025 19:25:16 +0000 (0:00:02.634) 0:02:55.220 *********
2025-08-29 19:33:51.100455 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.100462 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.100468 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.100475 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.100481 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.100488 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.100494 | orchestrator |
2025-08-29 19:33:51.100501 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-08-29 19:33:51.100507 | orchestrator | Friday 29 August 2025 19:25:16 +0000 (0:00:00.604) 0:02:55.825 *********
2025-08-29 19:33:51.100514 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.100522 | orchestrator |
2025-08-29 19:33:51.100528 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-08-29 19:33:51.100535 | orchestrator | Friday 29 August 2025 19:25:17 +0000 (0:00:01.173) 0:02:56.998 *********
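The two tasks above read `ceph --version` from the container and keep only the numeric field via `ceph_version.stdout.split`. The same split can be sketched in shell; the version string below is illustrative, since the job's actual stdout is not printed in this log:

```shell
# Sample `ceph --version` line (illustrative, not from this job's output):
version_line='ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)'

# Equivalent of the set_fact's stdout.split: keep the third
# whitespace-separated field, the numeric version.
ceph_version=$(printf '%s\n' "$version_line" | awk '{print $3}')
printf '%s\n' "$ceph_version"   # 18.2.4
```

The numeric version is what the subsequent `release.yml` include maps onto a release name (reef, in this run).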
2025-08-29 19:33:51.100541 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100548 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100555 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100561 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100568 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100574 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100581 | orchestrator |
2025-08-29 19:33:51.100587 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-08-29 19:33:51.100594 | orchestrator | Friday 29 August 2025 19:25:18 +0000 (0:00:00.745) 0:02:57.743 *********
2025-08-29 19:33:51.100600 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100607 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100616 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100632 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100644 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100654 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100665 | orchestrator |
2025-08-29 19:33:51.100675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-08-29 19:33:51.100686 | orchestrator | Friday 29 August 2025 19:25:19 +0000 (0:00:00.631) 0:02:58.375 *********
2025-08-29 19:33:51.100697 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100710 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100722 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100740 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100752 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100759 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100766 | orchestrator |
2025-08-29 19:33:51.100785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-08-29 19:33:51.100793 | orchestrator | Friday 29 August 2025 19:25:20 +0000 (0:00:00.799) 0:02:59.174 *********
2025-08-29 19:33:51.100799 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100806 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100813 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100819 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100826 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100832 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100839 | orchestrator |
2025-08-29 19:33:51.100846 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-08-29 19:33:51.100853 | orchestrator | Friday 29 August 2025 19:25:21 +0000 (0:00:01.100) 0:03:00.275 *********
2025-08-29 19:33:51.100860 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100872 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100879 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100885 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100892 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100898 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100905 | orchestrator |
2025-08-29 19:33:51.100912 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-08-29 19:33:51.100919 | orchestrator | Friday 29 August 2025 19:25:21 +0000 (0:00:00.641) 0:03:00.917 *********
2025-08-29 19:33:51.100925 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100932 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100939 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.100945 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.100952 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.100958 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.100965 | orchestrator |
2025-08-29 19:33:51.100972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-08-29 19:33:51.100979 | orchestrator | Friday 29 August 2025 19:25:22 +0000 (0:00:00.778) 0:03:01.696 *********
2025-08-29 19:33:51.100985 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.100992 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.100998 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.101005 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.101011 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.101018 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.101025 | orchestrator |
2025-08-29 19:33:51.101031 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-08-29 19:33:51.101038 | orchestrator | Friday 29 August 2025 19:25:23 +0000 (0:00:00.568) 0:03:02.264 *********
2025-08-29 19:33:51.101045 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.101051 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.101058 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.101064 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.101071 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.101078 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.101084 | orchestrator |
2025-08-29 19:33:51.101091 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-08-29 19:33:51.101098 | orchestrator | Friday 29 August 2025 19:25:23 +0000 (0:00:00.752) 0:03:03.017 *********
2025-08-29 19:33:51.101104 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.101111 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.101118 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.101124 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.101131 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.101137 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.101144 | orchestrator |
2025-08-29 19:33:51.101151 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-08-29 19:33:51.101158 | orchestrator | Friday 29 August 2025 19:25:25 +0000 (0:00:01.257) 0:03:04.274 *********
2025-08-29 19:33:51.101164 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.101171 | orchestrator |
2025-08-29 19:33:51.101178 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-08-29 19:33:51.101184 | orchestrator | Friday 29 August 2025 19:25:26 +0000 (0:00:01.410) 0:03:05.685 *********
2025-08-29 19:33:51.101191 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-08-29 19:33:51.101198 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-08-29 19:33:51.101204 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-08-29 19:33:51.101211 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-08-29 19:33:51.101218 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-08-29 19:33:51.101232 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-08-29 19:33:51.101239 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-08-29 19:33:51.101245 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-08-29 19:33:51.101252 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-08-29 19:33:51.101259 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-08-29 19:33:51.101266 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-08-29 19:33:51.101272 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-08-29 19:33:51.101279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-08-29 19:33:51.101290 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-08-29 19:33:51.101297 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-08-29 19:33:51.101303 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-08-29 19:33:51.101310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-08-29 19:33:51.101317 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-08-29 19:33:51.101328 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-08-29 19:33:51.101335 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-08-29 19:33:51.101342 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-08-29 19:33:51.101348 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-08-29 19:33:51.101355 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-08-29 19:33:51.101361 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-08-29 19:33:51.101368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-08-29 19:33:51.101375 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-08-29 19:33:51.101381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-08-29 19:33:51.101388 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-08-29 19:33:51.101395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-08-29 19:33:51.101401 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-08-29 19:33:51.101408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-08-29 19:33:51.101414 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-08-29 19:33:51.101421 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-08-29 19:33:51.101428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-08-29 19:33:51.101434 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-08-29 19:33:51.101441 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-08-29 19:33:51.101448 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-08-29 19:33:51.101455 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-08-29 19:33:51.101461 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-08-29 19:33:51.101468 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-08-29 19:33:51.101474 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-08-29 19:33:51.101481 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-08-29 19:33:51.101488 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 19:33:51.101494 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-08-29 19:33:51.101501 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 19:33:51.101508 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-08-29 19:33:51.101514 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 19:33:51.101521 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-08-29 19:33:51.101528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-08-29 19:33:51.101540 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 19:33:51.101546 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 19:33:51.101553 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-08-29 19:33:51.101560 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-08-29 19:33:51.101566 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 19:33:51.101573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 19:33:51.101580 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 19:33:51.101587 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 19:33:51.101593 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 19:33:51.101600 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 19:33:51.101606 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 19:33:51.101613 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 19:33:51.101620 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 19:33:51.101626 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 19:33:51.101633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 19:33:51.101640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 19:33:51.101646 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 19:33:51.101653 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 19:33:51.101659 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 19:33:51.101666 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 19:33:51.101673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 19:33:51.101679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 19:33:51.101689 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 19:33:51.101696 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 19:33:51.101703 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 19:33:51.101710 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 19:33:51.101716 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 19:33:51.101727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 19:33:51.101734 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-08-29 19:33:51.101740 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 19:33:51.101747 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 19:33:51.101754 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-08-29 19:33:51.101760 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 19:33:51.101767 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 19:33:51.101785 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-08-29 19:33:51.101792 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-08-29 19:33:51.101799 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 19:33:51.101805 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-08-29 19:33:51.101812 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 19:33:51.101819 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 19:33:51.101830 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-08-29 19:33:51.101837 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-08-29 19:33:51.101843 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-08-29 19:33:51.101850 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-08-29 19:33:51.101857 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-08-29 19:33:51.101863 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-08-29 19:33:51.101870 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-08-29 19:33:51.101877 | orchestrator |
2025-08-29 19:33:51.101883 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-08-29 19:33:51.101890 | orchestrator | Friday 29 August 2025 19:25:33 +0000 (0:00:06.514) 0:03:12.199 *********
2025-08-29 19:33:51.101897 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.101903 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.101910 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.101917 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:33:51.101923 | orchestrator |
2025-08-29 19:33:51.101930 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-08-29 19:33:51.101937 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:01.096) 0:03:13.295 *********
2025-08-29 19:33:51.101943 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.101950 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.101957 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.101964 | orchestrator |
2025-08-29 19:33:51.101970 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-08-29 19:33:51.101977 | orchestrator | Friday 29 August 2025 19:25:34 +0000 (0:00:00.774) 0:03:14.070 *********
2025-08-29 19:33:51.101984 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.101990 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.101997 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.102004 | orchestrator |
2025-08-29 19:33:51.102010 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-08-29 19:33:51.102082 | orchestrator | Friday 29 August 2025 19:25:36 +0000 (0:00:01.715) 0:03:15.785 *********
2025-08-29 19:33:51.102090 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.102097 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.102104 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.102110 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102117 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102124 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102130 | orchestrator |
2025-08-29 19:33:51.102137 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-08-29 19:33:51.102143 | orchestrator | Friday 29 August 2025 19:25:37 +0000 (0:00:00.589) 0:03:16.375 *********
2025-08-29 19:33:51.102150 | orchestrator | ok: [testbed-node-3]
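The "Create rados gateway instance directories" / "Generate environment file" pair above prepares one directory and one environment file per rgw instance for the containerized systemd unit. A rough sketch of the effect in a temp root; the `<cluster>-rgw.<host>.<instance>` layout and the `INST_NAME` variable are assumptions based on ceph-ansible's containerized rgw handling, not values shown in this log:

```shell
# Sketch only: recreate the per-instance directory and environment file
# under a temp root instead of /var/lib/ceph (layout is an assumption).
root=$(mktemp -d)
cluster=ceph
host=testbed-node-3
instance=rgw0

dir="$root/radosgw/${cluster}-rgw.${host}.${instance}"
mkdir -p "$dir"
printf 'INST_NAME=%s\n' "$instance" > "$dir/EnvironmentFile"

cat "$dir/EnvironmentFile"   # INST_NAME=rgw0
```

One file per instance is what lets a single templated systemd unit start several rgw daemons on the same host, each bound to its own address and port (8081 on 192.168.16.13-15 in this run).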
2025-08-29 19:33:51.102157 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.102163 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.102170 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102177 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102183 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102196 | orchestrator |
2025-08-29 19:33:51.102202 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-08-29 19:33:51.102213 | orchestrator | Friday 29 August 2025 19:25:38 +0000 (0:00:00.810) 0:03:17.186 *********
2025-08-29 19:33:51.102220 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102226 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102233 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102240 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102246 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102253 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102259 | orchestrator |
2025-08-29 19:33:51.102289 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-08-29 19:33:51.102297 | orchestrator | Friday 29 August 2025 19:25:38 +0000 (0:00:00.501) 0:03:17.688 *********
2025-08-29 19:33:51.102304 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102310 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102317 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102323 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102330 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102336 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102343 | orchestrator |
2025-08-29 19:33:51.102349 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-08-29 19:33:51.102356 | orchestrator | Friday 29 August 2025 19:25:39 +0000 (0:00:00.690) 0:03:18.379 *********
2025-08-29 19:33:51.102362 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102369 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102376 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102382 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102389 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102395 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102402 | orchestrator |
2025-08-29 19:33:51.102408 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-08-29 19:33:51.102415 | orchestrator | Friday 29 August 2025 19:25:39 +0000 (0:00:00.765) 0:03:19.105 *********
2025-08-29 19:33:51.102422 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102428 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102435 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102441 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102448 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102454 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102461 | orchestrator |
2025-08-29 19:33:51.102467 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-08-29 19:33:51.102474 | orchestrator | Friday 29 August 2025 19:25:40 +0000 (0:00:00.765) 0:03:19.870 *********
2025-08-29 19:33:51.102481 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102487 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102494 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102500 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102507 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102513 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102520 | orchestrator |
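The `ceph-volume lvm batch --report` tasks above (skipped in this run because the OSD hosts already carry OSDs) derive `num_osds` from the length of the JSON report. A minimal stand-in for that counting step on a sample report; the JSON below is illustrative, not taken from this job:

```shell
# Illustrative two-OSD report in the list-of-objects format; the real
# report would come from:
#   ceph-volume lvm batch --report --format=json <devices>
report='[{"data": "/dev/sdb", "data_size": "50.00 GB"}, {"data": "/dev/sdc", "data_size": "50.00 GB"}]'

# num_osds = number of entries in the list (one "data" device each).
num_osds=$(printf '%s\n' "$report" | grep -o '"data":' | wc -l)
echo "$num_osds"   # 2
```

Since the report is skipped here, the following "Run 'ceph-volume lvm list'" task supplies the count of OSDs that already exist instead.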
2025-08-29 19:33:51.102526 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-08-29 19:33:51.102533 | orchestrator | Friday 29 August 2025 19:25:41 +0000 (0:00:00.560) 0:03:20.636 *********
2025-08-29 19:33:51.102540 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102546 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102553 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102559 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102566 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102572 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102579 | orchestrator |
2025-08-29 19:33:51.102586 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-08-29 19:33:51.102597 | orchestrator | Friday 29 August 2025 19:25:42 +0000 (0:00:00.560) 0:03:21.196 *********
2025-08-29 19:33:51.102603 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102610 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102616 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102623 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.102629 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.102636 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.102642 | orchestrator |
2025-08-29 19:33:51.102649 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-08-29 19:33:51.102655 | orchestrator | Friday 29 August 2025 19:25:45 +0000 (0:00:03.002) 0:03:24.199 *********
2025-08-29 19:33:51.102662 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.102668 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.102675 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.102681 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102688 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102694 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102701 | orchestrator |
2025-08-29 19:33:51.102707 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-08-29 19:33:51.102714 | orchestrator | Friday 29 August 2025 19:25:45 +0000 (0:00:00.876) 0:03:25.076 *********
2025-08-29 19:33:51.102721 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:33:51.102727 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:33:51.102734 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:33:51.102740 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102747 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102753 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102760 | orchestrator |
2025-08-29 19:33:51.102766 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-08-29 19:33:51.102809 | orchestrator | Friday 29 August 2025 19:25:47 +0000 (0:00:01.257) 0:03:26.334 *********
2025-08-29 19:33:51.102816 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.102822 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.102829 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.102840 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102852 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.102866 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.102882 | orchestrator |
2025-08-29 19:33:51.102893 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-08-29 19:33:51.102904 | orchestrator | Friday 29 August 2025 19:25:48 +0000 (0:00:00.891) 0:03:27.226 *********
2025-08-29 19:33:51.102921 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.102932 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.102943 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 19:33:51.102954 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.102995 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.103008 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.103019 | orchestrator |
2025-08-29 19:33:51.103030 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-08-29 19:33:51.103041 | orchestrator | Friday 29 August 2025 19:25:49 +0000 (0:00:01.101) 0:03:28.328 *********
2025-08-29 19:33:51.103052 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-08-29 19:33:51.103060 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-08-29 19:33:51.103080 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-08-29 19:33:51.103087 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-08-29 19:33:51.103094 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.103101 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.103108 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-08-29 19:33:51.103115 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-08-29 19:33:51.103122 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.103128 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.103135 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.103141 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.103148 | orchestrator |
2025-08-29 19:33:51.103154 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-08-29 19:33:51.103161 | orchestrator | Friday 29 August 2025 19:25:50 +0000 (0:00:01.386) 0:03:29.714 *********
2025-08-29 19:33:51.103168 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.103174 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.103181 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.103187 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.103194 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.103201 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.103207 | orchestrator |
2025-08-29 19:33:51.103214 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-08-29 19:33:51.103220 | orchestrator | Friday 29 August 2025 19:25:51 +0000 (0:00:01.174) 0:03:30.888 *********
2025-08-29 19:33:51.103227 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.103233 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.103240 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.103246 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.103253 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.103260 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.103266 | orchestrator |
2025-08-29 19:33:51.103273 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 19:33:51.103279 | orchestrator | Friday 29 August 2025 19:25:52 +0000 (0:00:00.891) 0:03:31.780 *********
2025-08-29 19:33:51.103286 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.103292 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.103299 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:33:51.103306 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.103312 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.103319 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.103325 | orchestrator |
2025-08-29 19:33:51.103336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 19:33:51.103347 | orchestrator | Friday 29 August 2025 19:25:53 +0000 (0:00:00.935) 0:03:32.715 *********
2025-08-29 19:33:51.103353 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:33:51.103359 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:33:51.103365 |
orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.103371 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.103377 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.103383 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.103389 | orchestrator | 2025-08-29 19:33:51.103396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 19:33:51.103420 | orchestrator | Friday 29 August 2025 19:25:54 +0000 (0:00:01.327) 0:03:34.042 ********* 2025-08-29 19:33:51.103427 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.103434 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.103440 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.103446 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.103452 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.103458 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.103464 | orchestrator | 2025-08-29 19:33:51.103471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 19:33:51.103477 | orchestrator | Friday 29 August 2025 19:25:56 +0000 (0:00:01.198) 0:03:35.241 ********* 2025-08-29 19:33:51.103483 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.103490 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.103496 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.103502 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.103508 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.103514 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.103521 | orchestrator | 2025-08-29 19:33:51.103527 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 19:33:51.103533 | orchestrator | Friday 29 August 2025 19:25:57 +0000 (0:00:01.197) 0:03:36.438 ********* 2025-08-29 19:33:51.103539 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.103546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.103552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.103558 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.103565 | orchestrator | 2025-08-29 19:33:51.103571 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 19:33:51.103577 | orchestrator | Friday 29 August 2025 19:25:57 +0000 (0:00:00.526) 0:03:36.965 ********* 2025-08-29 19:33:51.103583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.103589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.103595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.103602 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.103608 | orchestrator | 2025-08-29 19:33:51.103614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 19:33:51.103620 | orchestrator | Friday 29 August 2025 19:25:58 +0000 (0:00:00.575) 0:03:37.541 ********* 2025-08-29 19:33:51.103627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.103633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.103639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.103645 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.103651 | orchestrator | 2025-08-29 19:33:51.103657 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 19:33:51.103664 | orchestrator | Friday 29 August 2025 19:25:59 +0000 (0:00:00.746) 0:03:38.287 ********* 2025-08-29 19:33:51.103670 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.103676 | 
orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.103682 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.103693 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.103699 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.103705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.103711 | orchestrator | 2025-08-29 19:33:51.103718 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 19:33:51.103724 | orchestrator | Friday 29 August 2025 19:26:00 +0000 (0:00:00.993) 0:03:39.280 ********* 2025-08-29 19:33:51.103730 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 19:33:51.103736 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 19:33:51.103743 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 19:33:51.103749 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-08-29 19:33:51.103755 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.103762 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-08-29 19:33:51.103786 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.103798 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-08-29 19:33:51.103805 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.103811 | orchestrator | 2025-08-29 19:33:51.103817 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-08-29 19:33:51.103824 | orchestrator | Friday 29 August 2025 19:26:02 +0000 (0:00:01.971) 0:03:41.251 ********* 2025-08-29 19:33:51.103830 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.103836 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.103842 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.103848 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.103855 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.103861 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 19:33:51.103867 | orchestrator | 2025-08-29 19:33:51.103873 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 19:33:51.103879 | orchestrator | Friday 29 August 2025 19:26:04 +0000 (0:00:02.748) 0:03:44.000 ********* 2025-08-29 19:33:51.103886 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.103892 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.103898 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.103904 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.103910 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.103916 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.103923 | orchestrator | 2025-08-29 19:33:51.103929 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 19:33:51.103935 | orchestrator | Friday 29 August 2025 19:26:07 +0000 (0:00:02.784) 0:03:46.784 ********* 2025-08-29 19:33:51.103945 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.103951 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.103957 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.103964 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.103970 | orchestrator | 2025-08-29 19:33:51.103976 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 19:33:51.103983 | orchestrator | Friday 29 August 2025 19:26:09 +0000 (0:00:01.353) 0:03:48.137 ********* 2025-08-29 19:33:51.104006 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.104013 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.104019 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.104025 | orchestrator | 2025-08-29 19:33:51.104031 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart 
script] *********************** 2025-08-29 19:33:51.104038 | orchestrator | Friday 29 August 2025 19:26:09 +0000 (0:00:00.382) 0:03:48.520 ********* 2025-08-29 19:33:51.104044 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.104050 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.104056 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.104063 | orchestrator | 2025-08-29 19:33:51.104069 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 19:33:51.104075 | orchestrator | Friday 29 August 2025 19:26:11 +0000 (0:00:01.628) 0:03:50.148 ********* 2025-08-29 19:33:51.104086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 19:33:51.104092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 19:33:51.104098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 19:33:51.104105 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.104111 | orchestrator | 2025-08-29 19:33:51.104121 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 19:33:51.104132 | orchestrator | Friday 29 August 2025 19:26:12 +0000 (0:00:00.989) 0:03:51.138 ********* 2025-08-29 19:33:51.104142 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.104152 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.104162 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.104172 | orchestrator | 2025-08-29 19:33:51.104183 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 19:33:51.104194 | orchestrator | Friday 29 August 2025 19:26:12 +0000 (0:00:00.325) 0:03:51.464 ********* 2025-08-29 19:33:51.104205 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.104216 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.104227 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
19:33:51.104234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-5, testbed-node-4 2025-08-29 19:33:51.104240 | orchestrator | 2025-08-29 19:33:51.104246 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 19:33:51.104252 | orchestrator | Friday 29 August 2025 19:26:13 +0000 (0:00:00.872) 0:03:52.336 ********* 2025-08-29 19:33:51.104258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.104265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.104271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.104277 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104283 | orchestrator | 2025-08-29 19:33:51.104289 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 19:33:51.104295 | orchestrator | Friday 29 August 2025 19:26:13 +0000 (0:00:00.296) 0:03:52.632 ********* 2025-08-29 19:33:51.104301 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104308 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.104314 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.104320 | orchestrator | 2025-08-29 19:33:51.104326 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 19:33:51.104333 | orchestrator | Friday 29 August 2025 19:26:13 +0000 (0:00:00.390) 0:03:53.023 ********* 2025-08-29 19:33:51.104339 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104345 | orchestrator | 2025-08-29 19:33:51.104351 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 19:33:51.104358 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:00.220) 0:03:53.244 ********* 2025-08-29 19:33:51.104364 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 19:33:51.104370 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.104376 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.104382 | orchestrator | 2025-08-29 19:33:51.104388 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 19:33:51.104394 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:00.334) 0:03:53.579 ********* 2025-08-29 19:33:51.104401 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104407 | orchestrator | 2025-08-29 19:33:51.104413 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 19:33:51.104419 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:00.216) 0:03:53.795 ********* 2025-08-29 19:33:51.104425 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104432 | orchestrator | 2025-08-29 19:33:51.104438 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 19:33:51.104444 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:00.178) 0:03:53.974 ********* 2025-08-29 19:33:51.104456 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104463 | orchestrator | 2025-08-29 19:33:51.104469 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 19:33:51.104475 | orchestrator | Friday 29 August 2025 19:26:14 +0000 (0:00:00.107) 0:03:54.081 ********* 2025-08-29 19:33:51.104481 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104487 | orchestrator | 2025-08-29 19:33:51.104493 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 19:33:51.104500 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:00.228) 0:03:54.309 ********* 2025-08-29 19:33:51.104506 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104512 | orchestrator | 2025-08-29 
19:33:51.104518 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 19:33:51.104528 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:00.200) 0:03:54.510 ********* 2025-08-29 19:33:51.104534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.104540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.104547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.104553 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104559 | orchestrator | 2025-08-29 19:33:51.104565 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 19:33:51.104592 | orchestrator | Friday 29 August 2025 19:26:15 +0000 (0:00:00.505) 0:03:55.016 ********* 2025-08-29 19:33:51.104599 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.104611 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.104618 | orchestrator | 2025-08-29 19:33:51.104624 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 19:33:51.104630 | orchestrator | Friday 29 August 2025 19:26:16 +0000 (0:00:00.442) 0:03:55.458 ********* 2025-08-29 19:33:51.104636 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104642 | orchestrator | 2025-08-29 19:33:51.104649 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 19:33:51.104655 | orchestrator | Friday 29 August 2025 19:26:16 +0000 (0:00:00.194) 0:03:55.653 ********* 2025-08-29 19:33:51.104661 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104667 | orchestrator | 2025-08-29 19:33:51.104673 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 
19:33:51.104679 | orchestrator | Friday 29 August 2025 19:26:16 +0000 (0:00:00.190) 0:03:55.844 ********* 2025-08-29 19:33:51.104685 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.104692 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.104698 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.104704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.104710 | orchestrator | 2025-08-29 19:33:51.104716 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-08-29 19:33:51.104723 | orchestrator | Friday 29 August 2025 19:26:17 +0000 (0:00:00.898) 0:03:56.743 ********* 2025-08-29 19:33:51.104729 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.104735 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.104741 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.104747 | orchestrator | 2025-08-29 19:33:51.104754 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 19:33:51.104760 | orchestrator | Friday 29 August 2025 19:26:17 +0000 (0:00:00.302) 0:03:57.045 ********* 2025-08-29 19:33:51.104766 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.104808 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.104814 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.104821 | orchestrator | 2025-08-29 19:33:51.104827 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 19:33:51.104833 | orchestrator | Friday 29 August 2025 19:26:19 +0000 (0:00:01.143) 0:03:58.188 ********* 2025-08-29 19:33:51.104844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.104851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.104857 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.104863 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.104870 | orchestrator | 2025-08-29 19:33:51.104876 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 19:33:51.104882 | orchestrator | Friday 29 August 2025 19:26:19 +0000 (0:00:00.797) 0:03:58.986 ********* 2025-08-29 19:33:51.104888 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.104894 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.104901 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.104907 | orchestrator | 2025-08-29 19:33:51.104913 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 19:33:51.104919 | orchestrator | Friday 29 August 2025 19:26:20 +0000 (0:00:00.290) 0:03:59.276 ********* 2025-08-29 19:33:51.104926 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.104932 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.104938 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.104944 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.104951 | orchestrator | 2025-08-29 19:33:51.104957 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 19:33:51.104963 | orchestrator | Friday 29 August 2025 19:26:20 +0000 (0:00:00.851) 0:04:00.127 ********* 2025-08-29 19:33:51.104969 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.104975 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.104982 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.104988 | orchestrator | 2025-08-29 19:33:51.104994 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 19:33:51.105000 | orchestrator | Friday 29 August 2025 19:26:21 +0000 (0:00:00.314) 
0:04:00.442 ********* 2025-08-29 19:33:51.105007 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.105013 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.105019 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.105025 | orchestrator | 2025-08-29 19:33:51.105032 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 19:33:51.105038 | orchestrator | Friday 29 August 2025 19:26:22 +0000 (0:00:01.442) 0:04:01.885 ********* 2025-08-29 19:33:51.105044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.105050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.105056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.105063 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.105069 | orchestrator | 2025-08-29 19:33:51.105075 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 19:33:51.105081 | orchestrator | Friday 29 August 2025 19:26:23 +0000 (0:00:00.629) 0:04:02.514 ********* 2025-08-29 19:33:51.105088 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.105094 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.105100 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.105106 | orchestrator | 2025-08-29 19:33:51.105116 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-08-29 19:33:51.105122 | orchestrator | Friday 29 August 2025 19:26:23 +0000 (0:00:00.337) 0:04:02.851 ********* 2025-08-29 19:33:51.105128 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.105135 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.105141 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.105147 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.105153 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 19:33:51.105178 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.105185 | orchestrator | 2025-08-29 19:33:51.105191 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 19:33:51.105202 | orchestrator | Friday 29 August 2025 19:26:24 +0000 (0:00:00.577) 0:04:03.429 ********* 2025-08-29 19:33:51.105208 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.105214 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.105221 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.105227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.105233 | orchestrator | 2025-08-29 19:33:51.105239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 19:33:51.105245 | orchestrator | Friday 29 August 2025 19:26:25 +0000 (0:00:01.102) 0:04:04.531 ********* 2025-08-29 19:33:51.105252 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.105258 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.105264 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.105270 | orchestrator | 2025-08-29 19:33:51.105276 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 19:33:51.105282 | orchestrator | Friday 29 August 2025 19:26:25 +0000 (0:00:00.372) 0:04:04.903 ********* 2025-08-29 19:33:51.105288 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.105295 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.105301 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.105307 | orchestrator | 2025-08-29 19:33:51.105313 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 19:33:51.105319 | orchestrator | Friday 29 August 2025 19:26:27 +0000 (0:00:01.548) 0:04:06.452 
********* 2025-08-29 19:33:51.105325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 19:33:51.105331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 19:33:51.105338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 19:33:51.105343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.105349 | orchestrator | 2025-08-29 19:33:51.105354 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 19:33:51.105360 | orchestrator | Friday 29 August 2025 19:26:27 +0000 (0:00:00.630) 0:04:07.082 ********* 2025-08-29 19:33:51.105365 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.105370 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.105376 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.105381 | orchestrator | 2025-08-29 19:33:51.105386 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-08-29 19:33:51.105392 | orchestrator | 2025-08-29 19:33:51.105397 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 19:33:51.105402 | orchestrator | Friday 29 August 2025 19:26:28 +0000 (0:00:00.588) 0:04:07.671 ********* 2025-08-29 19:33:51.105408 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.105413 | orchestrator | 2025-08-29 19:33:51.105419 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 19:33:51.105424 | orchestrator | Friday 29 August 2025 19:26:29 +0000 (0:00:00.833) 0:04:08.504 ********* 2025-08-29 19:33:51.105430 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.105435 | orchestrator | 2025-08-29 19:33:51.105440 | 
orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 19:33:51.105446 | orchestrator | Friday 29 August 2025 19:26:29 +0000 (0:00:00.595) 0:04:09.099 *********
2025-08-29 19:33:51.105451 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.105457 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.105462 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.105467 | orchestrator |
2025-08-29 19:33:51.105473 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 19:33:51.105478 | orchestrator | Friday 29 August 2025 19:26:30 +0000 (0:00:00.779) 0:04:09.879 *********
2025-08-29 19:33:51.105488 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105493 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105499 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105504 | orchestrator |
2025-08-29 19:33:51.105509 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 19:33:51.105515 | orchestrator | Friday 29 August 2025 19:26:31 +0000 (0:00:00.569) 0:04:10.448 *********
2025-08-29 19:33:51.105520 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105525 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105531 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105536 | orchestrator |
2025-08-29 19:33:51.105542 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 19:33:51.105547 | orchestrator | Friday 29 August 2025 19:26:31 +0000 (0:00:00.250) 0:04:10.698 *********
2025-08-29 19:33:51.105552 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105558 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105563 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105568 | orchestrator |
2025-08-29 19:33:51.105574 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 19:33:51.105579 | orchestrator | Friday 29 August 2025 19:26:31 +0000 (0:00:00.260) 0:04:10.959 *********
2025-08-29 19:33:51.105585 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.105590 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.105596 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.105601 | orchestrator |
2025-08-29 19:33:51.105606 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 19:33:51.105615 | orchestrator | Friday 29 August 2025 19:26:32 +0000 (0:00:00.676) 0:04:11.635 *********
2025-08-29 19:33:51.105621 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105626 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105631 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105637 | orchestrator |
2025-08-29 19:33:51.105642 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 19:33:51.105647 | orchestrator | Friday 29 August 2025 19:26:32 +0000 (0:00:00.249) 0:04:11.885 *********
2025-08-29 19:33:51.105667 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105674 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105679 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105684 | orchestrator |
2025-08-29 19:33:51.105690 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 19:33:51.105696 | orchestrator | Friday 29 August 2025 19:26:33 +0000 (0:00:00.484) 0:04:12.369 *********
2025-08-29 19:33:51.105701 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.105707 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.105712 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.105717 | orchestrator |
2025-08-29 19:33:51.105723 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 19:33:51.105728 | orchestrator | Friday 29 August 2025 19:26:33 +0000 (0:00:00.674) 0:04:13.044 *********
2025-08-29 19:33:51.105734 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.105739 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.105745 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.105750 | orchestrator |
2025-08-29 19:33:51.105756 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 19:33:51.105761 | orchestrator | Friday 29 August 2025 19:26:34 +0000 (0:00:00.783) 0:04:13.827 *********
2025-08-29 19:33:51.105767 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105785 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105791 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105796 | orchestrator |
2025-08-29 19:33:51.105801 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 19:33:51.105807 | orchestrator | Friday 29 August 2025 19:26:34 +0000 (0:00:00.267) 0:04:14.095 *********
2025-08-29 19:33:51.105812 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.105818 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.105827 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.105833 | orchestrator |
2025-08-29 19:33:51.105838 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 19:33:51.105844 | orchestrator | Friday 29 August 2025 19:26:35 +0000 (0:00:00.513) 0:04:14.608 *********
2025-08-29 19:33:51.105849 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105855 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105860 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105865 | orchestrator |
2025-08-29 19:33:51.105871 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 19:33:51.105876 | orchestrator | Friday 29 August 2025 19:26:35 +0000 (0:00:00.342) 0:04:14.951 *********
2025-08-29 19:33:51.105882 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105887 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105892 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105898 | orchestrator |
2025-08-29 19:33:51.105903 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 19:33:51.105908 | orchestrator | Friday 29 August 2025 19:26:36 +0000 (0:00:00.390) 0:04:15.342 *********
2025-08-29 19:33:51.105914 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105919 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105924 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105930 | orchestrator |
2025-08-29 19:33:51.105935 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 19:33:51.105941 | orchestrator | Friday 29 August 2025 19:26:36 +0000 (0:00:00.355) 0:04:15.697 *********
2025-08-29 19:33:51.105946 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105951 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105957 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105962 | orchestrator |
2025-08-29 19:33:51.105968 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 19:33:51.105973 | orchestrator | Friday 29 August 2025 19:26:37 +0000 (0:00:00.611) 0:04:16.309 *********
2025-08-29 19:33:51.105978 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.105984 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.105989 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.105994 | orchestrator |
2025-08-29 19:33:51.106000 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 19:33:51.106005 | orchestrator | Friday 29 August 2025 19:26:37 +0000 (0:00:00.321) 0:04:16.630 *********
2025-08-29 19:33:51.106010 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106038 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106044 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106050 | orchestrator |
2025-08-29 19:33:51.106055 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 19:33:51.106061 | orchestrator | Friday 29 August 2025 19:26:37 +0000 (0:00:00.361) 0:04:16.992 *********
2025-08-29 19:33:51.106066 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106071 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106077 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106082 | orchestrator |
2025-08-29 19:33:51.106088 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 19:33:51.106093 | orchestrator | Friday 29 August 2025 19:26:38 +0000 (0:00:00.328) 0:04:17.321 *********
2025-08-29 19:33:51.106098 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106104 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106109 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106114 | orchestrator |
2025-08-29 19:33:51.106120 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-08-29 19:33:51.106125 | orchestrator | Friday 29 August 2025 19:26:38 +0000 (0:00:00.781) 0:04:18.103 *********
2025-08-29 19:33:51.106131 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106136 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106141 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106147 | orchestrator |
2025-08-29 19:33:51.106156 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-08-29 19:33:51.106162 | orchestrator | Friday 29 August 2025 19:26:39 +0000 (0:00:00.395) 0:04:18.498 *********
2025-08-29 19:33:51.106173 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.106178 | orchestrator |
2025-08-29 19:33:51.106184 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-08-29 19:33:51.106189 | orchestrator | Friday 29 August 2025 19:26:39 +0000 (0:00:00.558) 0:04:19.056 *********
2025-08-29 19:33:51.106195 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.106200 | orchestrator |
2025-08-29 19:33:51.106223 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-08-29 19:33:51.106230 | orchestrator | Friday 29 August 2025 19:26:40 +0000 (0:00:00.436) 0:04:19.492 *********
2025-08-29 19:33:51.106235 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-08-29 19:33:51.106241 | orchestrator |
2025-08-29 19:33:51.106246 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-08-29 19:33:51.106252 | orchestrator | Friday 29 August 2025 19:26:41 +0000 (0:00:01.177) 0:04:20.670 *********
2025-08-29 19:33:51.106257 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106263 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106268 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106274 | orchestrator |
2025-08-29 19:33:51.106279 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-08-29 19:33:51.106285 | orchestrator | Friday 29 August 2025 19:26:42 +0000 (0:00:00.836) 0:04:21.506 *********
2025-08-29 19:33:51.106290 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106296 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106301 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106307 | orchestrator |
2025-08-29 19:33:51.106312 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-08-29 19:33:51.106317 | orchestrator | Friday 29 August 2025 19:26:42 +0000 (0:00:00.373) 0:04:21.880 *********
2025-08-29 19:33:51.106323 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106328 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106334 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.106339 | orchestrator |
2025-08-29 19:33:51.106345 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-08-29 19:33:51.106350 | orchestrator | Friday 29 August 2025 19:26:44 +0000 (0:00:01.277) 0:04:23.157 *********
2025-08-29 19:33:51.106356 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106361 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106367 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.106372 | orchestrator |
2025-08-29 19:33:51.106378 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-08-29 19:33:51.106384 | orchestrator | Friday 29 August 2025 19:26:45 +0000 (0:00:01.114) 0:04:24.272 *********
2025-08-29 19:33:51.106389 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106395 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106400 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.106405 | orchestrator |
2025-08-29 19:33:51.106411 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-08-29 19:33:51.106416 | orchestrator | Friday 29 August 2025 19:26:45 +0000 (0:00:00.714) 0:04:24.987 *********
2025-08-29 19:33:51.106422 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106427 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106433 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106438 | orchestrator |
2025-08-29 19:33:51.106444 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-08-29 19:33:51.106449 | orchestrator | Friday 29 August 2025 19:26:46 +0000 (0:00:00.694) 0:04:25.681 *********
2025-08-29 19:33:51.106455 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106460 | orchestrator |
2025-08-29 19:33:51.106466 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-08-29 19:33:51.106475 | orchestrator | Friday 29 August 2025 19:26:47 +0000 (0:00:01.183) 0:04:26.865 *********
2025-08-29 19:33:51.106481 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106486 | orchestrator |
2025-08-29 19:33:51.106492 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-08-29 19:33:51.106497 | orchestrator | Friday 29 August 2025 19:26:48 +0000 (0:00:00.765) 0:04:27.631 *********
2025-08-29 19:33:51.106503 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 19:33:51.106508 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:33:51.106513 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:33:51.106519 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:33:51.106524 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-08-29 19:33:51.106530 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:33:51.106535 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:33:51.106541 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-08-29 19:33:51.106546 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:33:51.106552 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-08-29 19:33:51.106557 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-08-29 19:33:51.106563 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-08-29 19:33:51.106568 | orchestrator |
2025-08-29 19:33:51.106574 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-08-29 19:33:51.106579 | orchestrator | Friday 29 August 2025 19:26:52 +0000 (0:00:03.510) 0:04:31.141 *********
2025-08-29 19:33:51.106585 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106590 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106596 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.106601 | orchestrator |
2025-08-29 19:33:51.106607 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-08-29 19:33:51.106612 | orchestrator | Friday 29 August 2025 19:26:53 +0000 (0:00:01.584) 0:04:32.726 *********
2025-08-29 19:33:51.106618 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106623 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106629 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106634 | orchestrator |
2025-08-29 19:33:51.106639 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-08-29 19:33:51.106648 | orchestrator | Friday 29 August 2025 19:26:53 +0000 (0:00:00.408) 0:04:33.134 *********
2025-08-29 19:33:51.106654 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.106659 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.106665 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.106721 | orchestrator |
2025-08-29 19:33:51.106727 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-08-29 19:33:51.106733 | orchestrator | Friday 29 August 2025 19:26:54 +0000 (0:00:00.377) 0:04:33.512 *********
2025-08-29 19:33:51.106738 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106762 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.106781 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106787 | orchestrator |
2025-08-29 19:33:51.106793 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-08-29 19:33:51.106798 | orchestrator | Friday 29 August 2025 19:26:56 +0000 (0:00:02.098) 0:04:35.610 *********
2025-08-29 19:33:51.106804 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106809 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106815 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.106820 | orchestrator |
2025-08-29 19:33:51.106825 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-08-29 19:33:51.106831 | orchestrator | Friday 29 August 2025 19:26:57 +0000 (0:00:01.474) 0:04:37.085 *********
2025-08-29 19:33:51.106836 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.106849 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.106854 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.106860 | orchestrator |
2025-08-29 19:33:51.106865 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-08-29 19:33:51.106870 | orchestrator | Friday 29 August 2025 19:26:58 +0000 (0:00:00.289) 0:04:37.374 *********
2025-08-29 19:33:51.106876 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.106881 | orchestrator |
2025-08-29 19:33:51.106887 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-08-29 19:33:51.106892 | orchestrator | Friday 29 August 2025 19:26:58 +0000 (0:00:00.511) 0:04:37.886 *********
2025-08-29 19:33:51.106897 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.106903 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.106908 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.106914 | orchestrator |
2025-08-29 19:33:51.106919 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-08-29 19:33:51.106924 | orchestrator | Friday 29 August 2025 19:26:59 +0000 (0:00:00.428) 0:04:38.315 *********
2025-08-29 19:33:51.106930 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.106935 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.106940 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.106946 | orchestrator |
2025-08-29 19:33:51.106951 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-08-29 19:33:51.106957 | orchestrator | Friday 29 August 2025 19:26:59 +0000 (0:00:00.283) 0:04:38.598 *********
2025-08-29 19:33:51.106962 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.106968 | orchestrator |
2025-08-29 19:33:51.106973 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-08-29 19:33:51.106978 | orchestrator | Friday 29 August 2025 19:26:59 +0000 (0:00:00.478) 0:04:39.076 *********
2025-08-29 19:33:51.106984 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.106989 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.106994 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.107000 | orchestrator |
2025-08-29 19:33:51.107005 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-08-29 19:33:51.107011 | orchestrator | Friday 29 August 2025 19:27:02 +0000 (0:00:02.422) 0:04:41.499 *********
2025-08-29 19:33:51.107016 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.107021 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.107027 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.107032 | orchestrator |
2025-08-29 19:33:51.107038 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-08-29 19:33:51.107043 | orchestrator | Friday 29 August 2025 19:27:03 +0000 (0:00:01.090) 0:04:42.589 *********
2025-08-29 19:33:51.107048 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.107054 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.107059 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.107064 | orchestrator |
2025-08-29 19:33:51.107070 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-08-29 19:33:51.107075 | orchestrator | Friday 29 August 2025 19:27:05 +0000 (0:00:01.820) 0:04:44.410 *********
2025-08-29 19:33:51.107080 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:33:51.107086 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:33:51.107091 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:33:51.107097 | orchestrator |
2025-08-29 19:33:51.107102 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-08-29 19:33:51.107107 | orchestrator | Friday 29 August 2025 19:27:07 +0000 (0:00:01.905) 0:04:46.315 *********
2025-08-29 19:33:51.107113 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.107118 | orchestrator |
2025-08-29 19:33:51.107124 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-08-29 19:33:51.107133 | orchestrator | Friday 29 August 2025 19:27:07 +0000 (0:00:00.750) 0:04:47.066 *********
2025-08-29 19:33:51.107139 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-08-29 19:33:51.107144 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107150 | orchestrator |
2025-08-29 19:33:51.107155 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-08-29 19:33:51.107160 | orchestrator | Friday 29 August 2025 19:27:29 +0000 (0:00:22.005) 0:05:09.071 *********
2025-08-29 19:33:51.107166 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107171 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107177 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107182 | orchestrator |
2025-08-29 19:33:51.107191 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-08-29 19:33:51.107196 | orchestrator | Friday 29 August 2025 19:27:40 +0000 (0:00:10.382) 0:05:19.454 *********
2025-08-29 19:33:51.107202 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107207 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107213 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107218 | orchestrator |
2025-08-29 19:33:51.107224 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-08-29 19:33:51.107244 | orchestrator | Friday 29 August 2025 19:27:40 +0000 (0:00:00.316) 0:05:19.770 *********
2025-08-29 19:33:51.107252 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-08-29 19:33:51.107260 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-08-29 19:33:51.107267 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-08-29 19:33:51.107274 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-08-29 19:33:51.107281 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-08-29 19:33:51.107287 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__c1c519a16e1472751b2a5d057e37e0d42ebe1208'}])
2025-08-29 19:33:51.107294 | orchestrator |
2025-08-29 19:33:51.107300 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 19:33:51.107311 | orchestrator | Friday 29 August 2025 19:27:55 +0000 (0:00:14.630) 0:05:34.400 *********
2025-08-29 19:33:51.107317 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107322 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107328 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107333 | orchestrator |
2025-08-29 19:33:51.107338 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-08-29 19:33:51.107344 | orchestrator | Friday 29 August 2025 19:27:55 +0000 (0:00:00.356) 0:05:34.757 *********
2025-08-29 19:33:51.107349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.107355 | orchestrator |
2025-08-29 19:33:51.107360 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 19:33:51.107365 | orchestrator | Friday 29 August 2025 19:27:56 +0000 (0:00:00.652) 0:05:35.410 *********
2025-08-29 19:33:51.107371 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107376 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107381 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107387 | orchestrator |
2025-08-29 19:33:51.107392 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 19:33:51.107398 | orchestrator | Friday 29 August 2025 19:27:56 +0000 (0:00:00.291) 0:05:35.701 *********
2025-08-29 19:33:51.107403 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107408 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107414 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107419 | orchestrator |
2025-08-29 19:33:51.107424 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 19:33:51.107430 | orchestrator | Friday 29 August 2025 19:27:56 +0000 (0:00:00.310) 0:05:36.011 *********
2025-08-29 19:33:51.107435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 19:33:51.107443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 19:33:51.107449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 19:33:51.107454 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107460 | orchestrator |
2025-08-29 19:33:51.107465 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 19:33:51.107470 | orchestrator | Friday 29 August 2025 19:27:57 +0000 (0:00:00.603) 0:05:36.614 *********
2025-08-29 19:33:51.107476 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107481 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107500 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107507 | orchestrator |
2025-08-29 19:33:51.107512 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-08-29 19:33:51.107518 | orchestrator |
2025-08-29 19:33:51.107523 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 19:33:51.107529 | orchestrator | Friday 29 August 2025 19:27:58 +0000 (0:00:00.655) 0:05:37.269 *********
2025-08-29 19:33:51.107534 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.107539 | orchestrator |
2025-08-29 19:33:51.107545 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 19:33:51.107550 | orchestrator | Friday 29 August 2025 19:27:58 +0000 (0:00:00.458) 0:05:37.728 *********
2025-08-29 19:33:51.107555 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:33:51.107561 | orchestrator |
2025-08-29 19:33:51.107566 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 19:33:51.107572 | orchestrator | Friday 29 August 2025 19:27:59 +0000 (0:00:00.461) 0:05:38.190 *********
2025-08-29 19:33:51.107577 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107582 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107588 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107597 | orchestrator |
2025-08-29 19:33:51.107603 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 19:33:51.107608 | orchestrator | Friday 29 August 2025 19:27:59 +0000 (0:00:00.868) 0:05:39.059 *********
2025-08-29 19:33:51.107614 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107619 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107624 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107630 | orchestrator |
2025-08-29 19:33:51.107635 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 19:33:51.107641 | orchestrator | Friday 29 August 2025 19:28:00 +0000 (0:00:00.297) 0:05:39.356 *********
2025-08-29 19:33:51.107646 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107651 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107657 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107662 | orchestrator |
2025-08-29 19:33:51.107667 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 19:33:51.107673 | orchestrator | Friday 29 August 2025 19:28:00 +0000 (0:00:00.294) 0:05:39.651 *********
2025-08-29 19:33:51.107678 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107683 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107689 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107694 | orchestrator |
2025-08-29 19:33:51.107700 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 19:33:51.107705 | orchestrator | Friday 29 August 2025 19:28:00 +0000 (0:00:00.284) 0:05:39.935 *********
2025-08-29 19:33:51.107710 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107716 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107721 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107727 | orchestrator |
2025-08-29 19:33:51.107732 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 19:33:51.107737 | orchestrator | Friday 29 August 2025 19:28:01 +0000 (0:00:00.892) 0:05:40.828 *********
2025-08-29 19:33:51.107743 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107748 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107753 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107759 | orchestrator |
2025-08-29 19:33:51.107764 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 19:33:51.107780 | orchestrator | Friday 29 August 2025 19:28:02 +0000 (0:00:00.340) 0:05:41.168 *********
2025-08-29 19:33:51.107786 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107791 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107797 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107802 | orchestrator |
2025-08-29 19:33:51.107808 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 19:33:51.107814 | orchestrator | Friday 29 August 2025 19:28:02 +0000 (0:00:00.281) 0:05:41.450 *********
2025-08-29 19:33:51.107819 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107825 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107830 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107836 | orchestrator |
2025-08-29 19:33:51.107841 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 19:33:51.107847 | orchestrator | Friday 29 August 2025 19:28:03 +0000 (0:00:00.716) 0:05:42.167 *********
2025-08-29 19:33:51.107852 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107857 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107863 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107868 | orchestrator |
2025-08-29 19:33:51.107874 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 19:33:51.107879 | orchestrator | Friday 29 August 2025 19:28:03 +0000 (0:00:00.919) 0:05:43.086 *********
2025-08-29 19:33:51.107885 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107890 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107896 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107901 | orchestrator |
2025-08-29 19:33:51.107907 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 19:33:51.107916 | orchestrator | Friday 29 August 2025 19:28:04 +0000 (0:00:00.264) 0:05:43.350 *********
2025-08-29 19:33:51.107922 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.107927 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.107932 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.107938 | orchestrator |
2025-08-29 19:33:51.107947 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 19:33:51.107953 | orchestrator | Friday 29 August 2025 19:28:04 +0000 (0:00:00.293) 0:05:43.644 *********
2025-08-29 19:33:51.107958 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.107964 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.107969 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.107975 | orchestrator |
2025-08-29 19:33:51.107980 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 19:33:51.107986 | orchestrator | Friday 29 August 2025 19:28:04 +0000 (0:00:00.300) 0:05:43.944 *********
2025-08-29 19:33:51.108006 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.108012 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.108018 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.108023 | orchestrator |
2025-08-29 19:33:51.108029 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 19:33:51.108034 | orchestrator | Friday 29 August 2025 19:28:05 +0000 (0:00:00.574) 0:05:44.519 *********
2025-08-29 19:33:51.108039 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.108045 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.108050 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.108055 | orchestrator |
2025-08-29 19:33:51.108061 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 19:33:51.108066 | orchestrator | Friday 29 August 2025 19:28:05 +0000 (0:00:00.332) 0:05:44.851 *********
2025-08-29 19:33:51.108072 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.108077 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.108082 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.108088 | orchestrator |
2025-08-29 19:33:51.108093 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 19:33:51.108098 | orchestrator | Friday 29 August 2025 19:28:06 +0000 (0:00:00.306) 0:05:45.157 *********
2025-08-29 19:33:51.108104 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:33:51.108109 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:33:51.108114 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:33:51.108120 | orchestrator |
2025-08-29 19:33:51.108125 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 19:33:51.108131 | orchestrator | Friday 29 August 2025 19:28:06 +0000 (0:00:00.345) 0:05:45.503 *********
2025-08-29 19:33:51.108136 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.108142 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.108147 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.108152 | orchestrator |
2025-08-29 19:33:51.108158 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 19:33:51.108163 | orchestrator | Friday 29 August 2025 19:28:06 +0000 (0:00:00.335) 0:05:45.838 *********
2025-08-29 19:33:51.108169 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.108174 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.108179 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.108185 | orchestrator |
2025-08-29 19:33:51.108190 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 19:33:51.108195 | orchestrator | Friday 29 August 2025 19:28:07 +0000 (0:00:00.609) 0:05:46.448 *********
2025-08-29 19:33:51.108201 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:33:51.108206 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:33:51.108211 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:33:51.108217 | orchestrator |
2025-08-29 19:33:51.108222 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-08-29 19:33:51.108228 | orchestrator | Friday 29 August 2025 19:28:07 +0000 (0:00:00.564) 0:05:47.013 *********
2025-08-29 19:33:51.108237 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 19:33:51.108243 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:33:51.108249 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] =>
(item=testbed-node-2) 2025-08-29 19:33:51.108254 | orchestrator | 2025-08-29 19:33:51.108259 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-08-29 19:33:51.108265 | orchestrator | Friday 29 August 2025 19:28:08 +0000 (0:00:00.888) 0:05:47.901 ********* 2025-08-29 19:33:51.108270 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.108275 | orchestrator | 2025-08-29 19:33:51.108281 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-08-29 19:33:51.108286 | orchestrator | Friday 29 August 2025 19:28:09 +0000 (0:00:00.848) 0:05:48.750 ********* 2025-08-29 19:33:51.108292 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.108297 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.108302 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.108308 | orchestrator | 2025-08-29 19:33:51.108313 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-08-29 19:33:51.108319 | orchestrator | Friday 29 August 2025 19:28:10 +0000 (0:00:00.718) 0:05:49.468 ********* 2025-08-29 19:33:51.108324 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.108329 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.108334 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.108340 | orchestrator | 2025-08-29 19:33:51.108346 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-08-29 19:33:51.108351 | orchestrator | Friday 29 August 2025 19:28:10 +0000 (0:00:00.376) 0:05:49.845 ********* 2025-08-29 19:33:51.108356 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 19:33:51.108362 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 19:33:51.108376 | orchestrator | changed: [testbed-node-0] => (item=None) 
2025-08-29 19:33:51.108382 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-08-29 19:33:51.108387 | orchestrator | 2025-08-29 19:33:51.108393 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-08-29 19:33:51.108398 | orchestrator | Friday 29 August 2025 19:28:21 +0000 (0:00:10.690) 0:06:00.535 ********* 2025-08-29 19:33:51.108403 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.108409 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.108414 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.108420 | orchestrator | 2025-08-29 19:33:51.108425 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-08-29 19:33:51.108431 | orchestrator | Friday 29 August 2025 19:28:22 +0000 (0:00:00.618) 0:06:01.153 ********* 2025-08-29 19:33:51.108436 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 19:33:51.108442 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 19:33:51.108447 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 19:33:51.108453 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 19:33:51.108458 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 19:33:51.108479 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 19:33:51.108485 | orchestrator | 2025-08-29 19:33:51.108491 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-08-29 19:33:51.108551 | orchestrator | Friday 29 August 2025 19:28:24 +0000 (0:00:02.130) 0:06:03.284 ********* 2025-08-29 19:33:51.108571 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 19:33:51.108576 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 19:33:51.108582 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 
19:33:51.108587 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 19:33:51.108593 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 19:33:51.108602 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 19:33:51.108608 | orchestrator | 2025-08-29 19:33:51.108613 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-08-29 19:33:51.108618 | orchestrator | Friday 29 August 2025 19:28:25 +0000 (0:00:01.181) 0:06:04.466 ********* 2025-08-29 19:33:51.108624 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.108629 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.108635 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.108640 | orchestrator | 2025-08-29 19:33:51.108645 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-08-29 19:33:51.108651 | orchestrator | Friday 29 August 2025 19:28:26 +0000 (0:00:00.677) 0:06:05.143 ********* 2025-08-29 19:33:51.108660 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.108668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.108677 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.108691 | orchestrator | 2025-08-29 19:33:51.108700 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-08-29 19:33:51.108709 | orchestrator | Friday 29 August 2025 19:28:26 +0000 (0:00:00.316) 0:06:05.460 ********* 2025-08-29 19:33:51.108717 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.108726 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.108734 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.108742 | orchestrator | 2025-08-29 19:33:51.108749 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-08-29 19:33:51.108758 | orchestrator | Friday 29 August 2025 19:28:26 +0000 (0:00:00.575) 0:06:06.036 
********* 2025-08-29 19:33:51.108766 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.108814 | orchestrator | 2025-08-29 19:33:51.108823 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-08-29 19:33:51.108832 | orchestrator | Friday 29 August 2025 19:28:27 +0000 (0:00:00.526) 0:06:06.563 ********* 2025-08-29 19:33:51.108841 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.108850 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.108859 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.108867 | orchestrator | 2025-08-29 19:33:51.108876 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-08-29 19:33:51.108882 | orchestrator | Friday 29 August 2025 19:28:27 +0000 (0:00:00.415) 0:06:06.978 ********* 2025-08-29 19:33:51.108887 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.108893 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.108898 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.108903 | orchestrator | 2025-08-29 19:33:51.108909 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-08-29 19:33:51.108915 | orchestrator | Friday 29 August 2025 19:28:28 +0000 (0:00:00.581) 0:06:07.559 ********* 2025-08-29 19:33:51.108920 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.108926 | orchestrator | 2025-08-29 19:33:51.108931 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-08-29 19:33:51.108936 | orchestrator | Friday 29 August 2025 19:28:28 +0000 (0:00:00.507) 0:06:08.066 ********* 2025-08-29 19:33:51.108942 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.108947 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 19:33:51.108953 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.108958 | orchestrator | 2025-08-29 19:33:51.108963 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-08-29 19:33:51.108969 | orchestrator | Friday 29 August 2025 19:28:30 +0000 (0:00:01.161) 0:06:09.228 ********* 2025-08-29 19:33:51.108974 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.108980 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.108985 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.108990 | orchestrator | 2025-08-29 19:33:51.109002 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-08-29 19:33:51.109008 | orchestrator | Friday 29 August 2025 19:28:31 +0000 (0:00:01.678) 0:06:10.906 ********* 2025-08-29 19:33:51.109013 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.109019 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.109024 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.109029 | orchestrator | 2025-08-29 19:33:51.109035 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-08-29 19:33:51.109040 | orchestrator | Friday 29 August 2025 19:28:33 +0000 (0:00:01.743) 0:06:12.649 ********* 2025-08-29 19:33:51.109046 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.109051 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.109056 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.109062 | orchestrator | 2025-08-29 19:33:51.109067 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-08-29 19:33:51.109073 | orchestrator | Friday 29 August 2025 19:28:35 +0000 (0:00:02.057) 0:06:14.707 ********* 2025-08-29 19:33:51.109082 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.109088 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 19:33:51.109093 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-08-29 19:33:51.109099 | orchestrator | 2025-08-29 19:33:51.109104 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-08-29 19:33:51.109110 | orchestrator | Friday 29 August 2025 19:28:36 +0000 (0:00:00.463) 0:06:15.170 ********* 2025-08-29 19:33:51.109142 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-08-29 19:33:51.109149 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-08-29 19:33:51.109154 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-08-29 19:33:51.109160 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-08-29 19:33:51.109165 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-08-29 19:33:51.109171 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.109176 | orchestrator | 2025-08-29 19:33:51.109182 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-08-29 19:33:51.109187 | orchestrator | Friday 29 August 2025 19:29:06 +0000 (0:00:30.877) 0:06:46.048 ********* 2025-08-29 19:33:51.109192 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.109198 | orchestrator | 2025-08-29 19:33:51.109203 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-08-29 19:33:51.109208 | orchestrator | Friday 29 August 2025 19:29:08 +0000 (0:00:01.290) 0:06:47.338 ********* 2025-08-29 19:33:51.109214 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.109219 | orchestrator | 2025-08-29 19:33:51.109224 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-08-29 19:33:51.109230 | orchestrator | Friday 29 August 2025 19:29:08 +0000 (0:00:00.315) 0:06:47.654 ********* 2025-08-29 19:33:51.109235 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.109240 | orchestrator | 2025-08-29 19:33:51.109246 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-08-29 19:33:51.109251 | orchestrator | Friday 29 August 2025 19:29:08 +0000 (0:00:00.154) 0:06:47.808 ********* 2025-08-29 19:33:51.109256 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-08-29 19:33:51.109262 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-08-29 19:33:51.109267 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-08-29 19:33:51.109272 | orchestrator | 2025-08-29 19:33:51.109277 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-08-29 19:33:51.109287 | orchestrator | Friday 29 August 2025 19:29:15 +0000 (0:00:06.500) 0:06:54.309 ********* 2025-08-29 19:33:51.109293 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-08-29 19:33:51.109298 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-08-29 19:33:51.109304 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-08-29 19:33:51.109309 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-08-29 19:33:51.109314 | orchestrator | 2025-08-29 19:33:51.109320 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 19:33:51.109325 | orchestrator | Friday 29 August 2025 19:29:19 +0000 (0:00:04.687) 0:06:58.996 ********* 2025-08-29 19:33:51.109330 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.109336 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.109345 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.109354 | orchestrator | 2025-08-29 19:33:51.109362 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 19:33:51.109370 | orchestrator | Friday 29 August 2025 19:29:20 +0000 (0:00:00.978) 0:06:59.975 ********* 2025-08-29 19:33:51.109378 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.109386 | orchestrator | 2025-08-29 19:33:51.109395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 19:33:51.109402 | orchestrator | Friday 29 August 2025 19:29:21 +0000 (0:00:00.530) 0:07:00.505 ********* 2025-08-29 19:33:51.109407 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.109412 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.109417 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 19:33:51.109421 | orchestrator | 2025-08-29 19:33:51.109426 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 19:33:51.109431 | orchestrator | Friday 29 August 2025 19:29:21 +0000 (0:00:00.332) 0:07:00.838 ********* 2025-08-29 19:33:51.109436 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.109441 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.109446 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.109451 | orchestrator | 2025-08-29 19:33:51.109456 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 19:33:51.109461 | orchestrator | Friday 29 August 2025 19:29:23 +0000 (0:00:01.500) 0:07:02.338 ********* 2025-08-29 19:33:51.109465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 19:33:51.109470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 19:33:51.109475 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 19:33:51.109480 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.109485 | orchestrator | 2025-08-29 19:33:51.109490 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 19:33:51.109494 | orchestrator | Friday 29 August 2025 19:29:23 +0000 (0:00:00.651) 0:07:02.990 ********* 2025-08-29 19:33:51.109503 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.109507 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.109512 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.109517 | orchestrator | 2025-08-29 19:33:51.109522 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-08-29 19:33:51.109527 | orchestrator | 2025-08-29 19:33:51.109532 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 
19:33:51.109536 | orchestrator | Friday 29 August 2025 19:29:24 +0000 (0:00:00.638) 0:07:03.629 ********* 2025-08-29 19:33:51.109558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.109564 | orchestrator | 2025-08-29 19:33:51.109569 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 19:33:51.109574 | orchestrator | Friday 29 August 2025 19:29:25 +0000 (0:00:00.806) 0:07:04.435 ********* 2025-08-29 19:33:51.109583 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.109588 | orchestrator | 2025-08-29 19:33:51.109592 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 19:33:51.109597 | orchestrator | Friday 29 August 2025 19:29:25 +0000 (0:00:00.551) 0:07:04.986 ********* 2025-08-29 19:33:51.109602 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.109607 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.109612 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.109616 | orchestrator | 2025-08-29 19:33:51.109621 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 19:33:51.109626 | orchestrator | Friday 29 August 2025 19:29:26 +0000 (0:00:00.301) 0:07:05.288 ********* 2025-08-29 19:33:51.109631 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.109636 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.109641 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.109645 | orchestrator | 2025-08-29 19:33:51.109650 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 19:33:51.109655 | orchestrator | Friday 29 August 2025 19:29:27 +0000 (0:00:01.003) 0:07:06.291 ********* 
2025-08-29 19:33:51.109660 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.109665 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.109670 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.109675 | orchestrator | 2025-08-29 19:33:51.109679 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 19:33:51.109684 | orchestrator | Friday 29 August 2025 19:29:27 +0000 (0:00:00.775) 0:07:07.066 ********* 2025-08-29 19:33:51.109689 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.109694 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.109699 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.109704 | orchestrator | 2025-08-29 19:33:51.109708 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 19:33:51.109713 | orchestrator | Friday 29 August 2025 19:29:28 +0000 (0:00:00.798) 0:07:07.865 ********* 2025-08-29 19:33:51.109718 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.109723 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.109728 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.109732 | orchestrator | 2025-08-29 19:33:51.109737 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 19:33:51.109742 | orchestrator | Friday 29 August 2025 19:29:29 +0000 (0:00:00.312) 0:07:08.177 ********* 2025-08-29 19:33:51.109747 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.109752 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.109757 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.109761 | orchestrator | 2025-08-29 19:33:51.109766 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 19:33:51.109787 | orchestrator | Friday 29 August 2025 19:29:29 +0000 (0:00:00.633) 0:07:08.811 ********* 2025-08-29 19:33:51.109795 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.109802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.109810 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.109815 | orchestrator | 2025-08-29 19:33:51.109820 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 19:33:51.109825 | orchestrator | Friday 29 August 2025 19:29:30 +0000 (0:00:00.357) 0:07:09.168 ********* 2025-08-29 19:33:51.109830 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.109835 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.109840 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.109844 | orchestrator | 2025-08-29 19:33:51.109849 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 19:33:51.109854 | orchestrator | Friday 29 August 2025 19:29:30 +0000 (0:00:00.732) 0:07:09.901 ********* 2025-08-29 19:33:51.109859 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.109864 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.109873 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.109877 | orchestrator | 2025-08-29 19:33:51.109883 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 19:33:51.109892 | orchestrator | Friday 29 August 2025 19:29:31 +0000 (0:00:00.787) 0:07:10.688 ********* 2025-08-29 19:33:51.109899 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.109906 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.109914 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.109936 | orchestrator | 2025-08-29 19:33:51.109944 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 19:33:51.109952 | orchestrator | Friday 29 August 2025 19:29:32 +0000 (0:00:00.573) 0:07:11.261 ********* 2025-08-29 19:33:51.109959 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 19:33:51.109968 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.109976 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.109984 | orchestrator | 2025-08-29 19:33:51.109993 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 19:33:51.109998 | orchestrator | Friday 29 August 2025 19:29:32 +0000 (0:00:00.315) 0:07:11.577 ********* 2025-08-29 19:33:51.110003 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110008 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110032 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110038 | orchestrator | 2025-08-29 19:33:51.110043 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 19:33:51.110051 | orchestrator | Friday 29 August 2025 19:29:32 +0000 (0:00:00.315) 0:07:11.892 ********* 2025-08-29 19:33:51.110056 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110061 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110066 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110071 | orchestrator | 2025-08-29 19:33:51.110075 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 19:33:51.110080 | orchestrator | Friday 29 August 2025 19:29:33 +0000 (0:00:00.357) 0:07:12.250 ********* 2025-08-29 19:33:51.110085 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110090 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110099 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110104 | orchestrator | 2025-08-29 19:33:51.110108 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 19:33:51.110113 | orchestrator | Friday 29 August 2025 19:29:33 +0000 (0:00:00.599) 0:07:12.850 ********* 2025-08-29 19:33:51.110118 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.110123 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.110128 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110132 | orchestrator | 2025-08-29 19:33:51.110137 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 19:33:51.110142 | orchestrator | Friday 29 August 2025 19:29:34 +0000 (0:00:00.325) 0:07:13.175 ********* 2025-08-29 19:33:51.110147 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.110152 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.110156 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110161 | orchestrator | 2025-08-29 19:33:51.110166 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 19:33:51.110171 | orchestrator | Friday 29 August 2025 19:29:34 +0000 (0:00:00.301) 0:07:13.476 ********* 2025-08-29 19:33:51.110176 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.110180 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.110185 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110190 | orchestrator | 2025-08-29 19:33:51.110195 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 19:33:51.110199 | orchestrator | Friday 29 August 2025 19:29:34 +0000 (0:00:00.319) 0:07:13.796 ********* 2025-08-29 19:33:51.110204 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110209 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110214 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110219 | orchestrator | 2025-08-29 19:33:51.110224 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 19:33:51.110233 | orchestrator | Friday 29 August 2025 19:29:35 +0000 (0:00:00.629) 0:07:14.426 ********* 2025-08-29 19:33:51.110238 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110242 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 19:33:51.110247 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110252 | orchestrator | 2025-08-29 19:33:51.110257 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-08-29 19:33:51.110262 | orchestrator | Friday 29 August 2025 19:29:35 +0000 (0:00:00.594) 0:07:15.020 ********* 2025-08-29 19:33:51.110266 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110271 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110276 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110281 | orchestrator | 2025-08-29 19:33:51.110285 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-08-29 19:33:51.110290 | orchestrator | Friday 29 August 2025 19:29:36 +0000 (0:00:00.336) 0:07:15.356 ********* 2025-08-29 19:33:51.110295 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 19:33:51.110300 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 19:33:51.110305 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 19:33:51.110309 | orchestrator | 2025-08-29 19:33:51.110314 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-08-29 19:33:51.110319 | orchestrator | Friday 29 August 2025 19:29:37 +0000 (0:00:00.913) 0:07:16.270 ********* 2025-08-29 19:33:51.110324 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.110329 | orchestrator | 2025-08-29 19:33:51.110333 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-08-29 19:33:51.110338 | orchestrator | Friday 29 August 2025 19:29:37 +0000 (0:00:00.812) 0:07:17.082 ********* 2025-08-29 19:33:51.110343 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 19:33:51.110348 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.110352 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110357 | orchestrator | 2025-08-29 19:33:51.110362 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-08-29 19:33:51.110367 | orchestrator | Friday 29 August 2025 19:29:38 +0000 (0:00:00.319) 0:07:17.402 ********* 2025-08-29 19:33:51.110372 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.110376 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.110381 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110386 | orchestrator | 2025-08-29 19:33:51.110391 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-08-29 19:33:51.110395 | orchestrator | Friday 29 August 2025 19:29:38 +0000 (0:00:00.296) 0:07:17.699 ********* 2025-08-29 19:33:51.110400 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110405 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110410 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110414 | orchestrator | 2025-08-29 19:33:51.110419 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 19:33:51.110424 | orchestrator | Friday 29 August 2025 19:29:39 +0000 (0:00:00.891) 0:07:18.590 ********* 2025-08-29 19:33:51.110429 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110434 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110438 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110443 | orchestrator | 2025-08-29 19:33:51.110448 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 19:33:51.110453 | orchestrator | Friday 29 August 2025 19:29:39 +0000 (0:00:00.353) 0:07:18.943 ********* 2025-08-29 19:33:51.110458 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 19:33:51.110465 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 19:33:51.110473 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 19:33:51.110478 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 19:33:51.110483 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 19:33:51.110492 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 19:33:51.110498 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 19:33:51.110503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 19:33:51.110507 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 19:33:51.110512 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 19:33:51.110517 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 19:33:51.110522 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 19:33:51.110527 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 19:33:51.110531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 19:33:51.110536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 19:33:51.110541 | orchestrator | 2025-08-29 19:33:51.110546 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
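The "Apply operating system tuning" task above sets a fixed list of kernel parameters on each OSD node. As a minimal sketch, assuming exactly the key/value pairs shown in the log (the real ceph-ansible role applies them through Ansible's `sysctl` module rather than a rendered file), the equivalent sysctl.d fragment could be produced like this:

```python
# Sketch only: render the kernel settings from the "Apply operating system
# tuning" task into sysctl.d-style text. Values are copied from the log above;
# ceph-ansible itself applies them via the ansible.posix.sysctl module.
tuning = [
    {"name": "fs.aio-max-nr", "value": "1048576"},   # async I/O request limit
    {"name": "fs.file-max", "value": 26234859},      # system-wide open files
    {"name": "vm.zone_reclaim_mode", "value": 0},    # avoid NUMA zone reclaim
    {"name": "vm.swappiness", "value": 10},          # prefer page cache over swap
    {"name": "vm.min_free_kbytes", "value": "67584"},  # derived per node earlier
]

def render_sysctl(settings):
    """Return the text of a sysctl.d fragment for the given settings."""
    return "\n".join(f"{s['name']} = {s['value']}" for s in settings) + "\n"

if __name__ == "__main__":
    print(render_sysctl(tuning), end="")
```

Note that `vm.min_free_kbytes` is not hard-coded by the role: the two preceding tasks read the kernel default and set the fact, which is why its value can differ between environments.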
2025-08-29 19:33:51.110550 | orchestrator | Friday 29 August 2025 19:29:42 +0000 (0:00:03.029) 0:07:21.973 ********* 2025-08-29 19:33:51.110555 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.110560 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.110565 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110569 | orchestrator | 2025-08-29 19:33:51.110574 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 19:33:51.110579 | orchestrator | Friday 29 August 2025 19:29:43 +0000 (0:00:00.308) 0:07:22.281 ********* 2025-08-29 19:33:51.110584 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.110589 | orchestrator | 2025-08-29 19:33:51.110593 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 19:33:51.110598 | orchestrator | Friday 29 August 2025 19:29:43 +0000 (0:00:00.797) 0:07:23.079 ********* 2025-08-29 19:33:51.110603 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 19:33:51.110608 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 19:33:51.110612 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 19:33:51.110617 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 19:33:51.110622 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 19:33:51.110627 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 19:33:51.110632 | orchestrator | 2025-08-29 19:33:51.110636 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 19:33:51.110641 | orchestrator | Friday 29 August 2025 19:29:44 +0000 (0:00:00.944) 0:07:24.024 ********* 2025-08-29 19:33:51.110646 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 19:33:51.110651 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 19:33:51.110655 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 19:33:51.110660 | orchestrator | 2025-08-29 19:33:51.110665 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 19:33:51.110670 | orchestrator | Friday 29 August 2025 19:29:46 +0000 (0:00:02.034) 0:07:26.059 ********* 2025-08-29 19:33:51.110674 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 19:33:51.110683 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 19:33:51.110688 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.110693 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 19:33:51.110697 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 19:33:51.110702 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.110707 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 19:33:51.110712 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 19:33:51.110716 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.110721 | orchestrator | 2025-08-29 19:33:51.110726 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 19:33:51.110731 | orchestrator | Friday 29 August 2025 19:29:48 +0000 (0:00:01.252) 0:07:27.311 ********* 2025-08-29 19:33:51.110736 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.110740 | orchestrator | 2025-08-29 19:33:51.110745 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 19:33:51.110750 | orchestrator | Friday 29 August 2025 19:29:50 +0000 (0:00:01.937) 0:07:29.249 ********* 2025-08-29 19:33:51.110755 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.110760 | orchestrator | 2025-08-29 19:33:51.110765 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 19:33:51.110782 | orchestrator | Friday 29 August 2025 19:29:50 +0000 (0:00:00.579) 0:07:29.829 ********* 2025-08-29 19:33:51.110791 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f946ce78-a8de-59ba-8bf5-045c292b6708', 'data_vg': 'ceph-f946ce78-a8de-59ba-8bf5-045c292b6708'}) 2025-08-29 19:33:51.110796 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-159b9ed4-8d08-5970-86a8-bd63a32380d6', 'data_vg': 'ceph-159b9ed4-8d08-5970-86a8-bd63a32380d6'}) 2025-08-29 19:33:51.110804 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d29334ae-dac4-5c8b-9540-76ee60da5ca1', 'data_vg': 'ceph-d29334ae-dac4-5c8b-9540-76ee60da5ca1'}) 2025-08-29 19:33:51.110810 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9d878572-29ec-5c6d-9e5c-f341c26bb0e1', 'data_vg': 'ceph-9d878572-29ec-5c6d-9e5c-f341c26bb0e1'}) 2025-08-29 19:33:51.110815 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-338f76e1-8833-5be4-9943-9980bb5050e8', 'data_vg': 'ceph-338f76e1-8833-5be4-9943-9980bb5050e8'}) 2025-08-29 19:33:51.110820 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-916dc454-8beb-55d0-b00a-22c96f7025a6', 'data_vg': 'ceph-916dc454-8beb-55d0-b00a-22c96f7025a6'}) 2025-08-29 19:33:51.110824 | orchestrator | 2025-08-29 19:33:51.110829 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 19:33:51.110834 | orchestrator | Friday 29 August 2025 19:30:29 +0000 (0:00:39.200) 0:08:09.029 ********* 2025-08-29 19:33:51.110839 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.110844 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
19:33:51.110849 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.110853 | orchestrator | 2025-08-29 19:33:51.110858 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 19:33:51.110863 | orchestrator | Friday 29 August 2025 19:30:30 +0000 (0:00:00.622) 0:08:09.651 ********* 2025-08-29 19:33:51.110868 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.110873 | orchestrator | 2025-08-29 19:33:51.110877 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 19:33:51.110882 | orchestrator | Friday 29 August 2025 19:30:31 +0000 (0:00:00.616) 0:08:10.268 ********* 2025-08-29 19:33:51.110887 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110892 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110897 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110902 | orchestrator | 2025-08-29 19:33:51.110912 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 19:33:51.110917 | orchestrator | Friday 29 August 2025 19:30:32 +0000 (0:00:01.572) 0:08:11.840 ********* 2025-08-29 19:33:51.110921 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.110926 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.110931 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.110936 | orchestrator | 2025-08-29 19:33:51.110941 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 19:33:51.110946 | orchestrator | Friday 29 August 2025 19:30:35 +0000 (0:00:02.591) 0:08:14.432 ********* 2025-08-29 19:33:51.110950 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.110955 | orchestrator | 2025-08-29 19:33:51.110960 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-08-29 19:33:51.110965 | orchestrator | Friday 29 August 2025 19:30:35 +0000 (0:00:00.548) 0:08:14.980 ********* 2025-08-29 19:33:51.110970 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.110974 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.110979 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.110984 | orchestrator | 2025-08-29 19:33:51.110989 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 19:33:51.110994 | orchestrator | Friday 29 August 2025 19:30:36 +0000 (0:00:01.091) 0:08:16.071 ********* 2025-08-29 19:33:51.110998 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.111003 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.111008 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.111013 | orchestrator | 2025-08-29 19:33:51.111018 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 19:33:51.111023 | orchestrator | Friday 29 August 2025 19:30:38 +0000 (0:00:01.327) 0:08:17.399 ********* 2025-08-29 19:33:51.111027 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.111032 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.111037 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.111042 | orchestrator | 2025-08-29 19:33:51.111046 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 19:33:51.111051 | orchestrator | Friday 29 August 2025 19:30:39 +0000 (0:00:01.509) 0:08:18.908 ********* 2025-08-29 19:33:51.111056 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111060 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111065 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111070 | orchestrator | 2025-08-29 19:33:51.111075 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-08-29 19:33:51.111080 | orchestrator | Friday 29 August 2025 19:30:40 +0000 (0:00:00.373) 0:08:19.282 ********* 2025-08-29 19:33:51.111084 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111089 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111094 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111099 | orchestrator | 2025-08-29 19:33:51.111103 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 19:33:51.111108 | orchestrator | Friday 29 August 2025 19:30:40 +0000 (0:00:00.388) 0:08:19.670 ********* 2025-08-29 19:33:51.111113 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 19:33:51.111118 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-08-29 19:33:51.111122 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-08-29 19:33:51.111127 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-08-29 19:33:51.111132 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-08-29 19:33:51.111137 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-08-29 19:33:51.111142 | orchestrator | 2025-08-29 19:33:51.111149 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 19:33:51.111154 | orchestrator | Friday 29 August 2025 19:30:41 +0000 (0:00:01.224) 0:08:20.895 ********* 2025-08-29 19:33:51.111159 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 19:33:51.111164 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 19:33:51.111172 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 19:33:51.111177 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 19:33:51.111181 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-08-29 19:33:51.111189 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-08-29 19:33:51.111194 | orchestrator | 2025-08-29 19:33:51.111199 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-08-29 19:33:51.111204 | orchestrator | Friday 29 August 2025 19:30:43 +0000 (0:00:02.071) 0:08:22.967 ********* 2025-08-29 19:33:51.111209 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 19:33:51.111213 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 19:33:51.111218 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 19:33:51.111223 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-08-29 19:33:51.111228 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-08-29 19:33:51.111233 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-08-29 19:33:51.111237 | orchestrator | 2025-08-29 19:33:51.111242 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 19:33:51.111247 | orchestrator | Friday 29 August 2025 19:30:47 +0000 (0:00:03.252) 0:08:26.219 ********* 2025-08-29 19:33:51.111252 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111256 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111261 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.111266 | orchestrator | 2025-08-29 19:33:51.111271 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 19:33:51.111275 | orchestrator | Friday 29 August 2025 19:30:49 +0000 (0:00:02.288) 0:08:28.508 ********* 2025-08-29 19:33:51.111280 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111285 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111290 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
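The "Systemd start osd" step starts one templated unit per OSD id collected by the earlier "Get osd ids" / "Collect osd ids" tasks. As a rough sketch, assuming the `ceph-osd@<id>` instance naming implied by the generated `ceph-osd@.service` template (the commands below are illustrative and not executed), the per-node mapping from the log looks like this:

```python
# Sketch only: map the OSD ids reported in the log to the templated systemd
# units that the "Systemd start osd" task starts. Unit naming assumed from the
# generated ceph-osd@.service template; nothing here shells out.
def systemctl_start_cmds(osd_ids):
    """Build one `systemctl start ceph-osd@<id>` command per OSD id."""
    return [["systemctl", "start", f"ceph-osd@{osd_id}"] for osd_id in osd_ids]

# OSD ids per node, as they appear in the loop items above
node_osds = {
    "testbed-node-3": [0, 4],
    "testbed-node-4": [3, 1],
    "testbed-node-5": [2, 5],
}

for node, ids in sorted(node_osds.items()):
    for cmd in systemctl_start_cmds(ids):
        print(node, " ".join(cmd))
```

This also explains the surrounding noup dance: `noup` is set before OSD creation so newly started daemons do not get marked up mid-deployment, and the "Unset noup flag" / "Wait for all osd to be up" tasks run once (delegated to testbed-node-0) after all units are started.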
2025-08-29 19:33:51.111295 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.111300 | orchestrator | 2025-08-29 19:33:51.111305 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-08-29 19:33:51.111309 | orchestrator | Friday 29 August 2025 19:31:02 +0000 (0:00:13.066) 0:08:41.574 ********* 2025-08-29 19:33:51.111314 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111319 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111324 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111328 | orchestrator | 2025-08-29 19:33:51.111333 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 19:33:51.111338 | orchestrator | Friday 29 August 2025 19:31:03 +0000 (0:00:00.839) 0:08:42.413 ********* 2025-08-29 19:33:51.111351 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111356 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111361 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111366 | orchestrator | 2025-08-29 19:33:51.111371 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 19:33:51.111375 | orchestrator | Friday 29 August 2025 19:31:03 +0000 (0:00:00.568) 0:08:42.982 ********* 2025-08-29 19:33:51.111380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.111385 | orchestrator | 2025-08-29 19:33:51.111390 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 19:33:51.111395 | orchestrator | Friday 29 August 2025 19:31:04 +0000 (0:00:00.560) 0:08:43.542 ********* 2025-08-29 19:33:51.111399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.111404 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-08-29 19:33:51.111409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.111414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111419 | orchestrator | 2025-08-29 19:33:51.111427 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 19:33:51.111432 | orchestrator | Friday 29 August 2025 19:31:04 +0000 (0:00:00.402) 0:08:43.944 ********* 2025-08-29 19:33:51.111437 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111441 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111446 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111451 | orchestrator | 2025-08-29 19:33:51.111456 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 19:33:51.111461 | orchestrator | Friday 29 August 2025 19:31:05 +0000 (0:00:00.299) 0:08:44.244 ********* 2025-08-29 19:33:51.111465 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111470 | orchestrator | 2025-08-29 19:33:51.111475 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 19:33:51.111480 | orchestrator | Friday 29 August 2025 19:31:05 +0000 (0:00:00.223) 0:08:44.467 ********* 2025-08-29 19:33:51.111485 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111490 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111494 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111499 | orchestrator | 2025-08-29 19:33:51.111504 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 19:33:51.111509 | orchestrator | Friday 29 August 2025 19:31:05 +0000 (0:00:00.591) 0:08:45.059 ********* 2025-08-29 19:33:51.111514 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111518 | orchestrator | 2025-08-29 19:33:51.111523 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 19:33:51.111528 | orchestrator | Friday 29 August 2025 19:31:06 +0000 (0:00:00.234) 0:08:45.294 ********* 2025-08-29 19:33:51.111533 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111538 | orchestrator | 2025-08-29 19:33:51.111542 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 19:33:51.111550 | orchestrator | Friday 29 August 2025 19:31:06 +0000 (0:00:00.231) 0:08:45.525 ********* 2025-08-29 19:33:51.111555 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111560 | orchestrator | 2025-08-29 19:33:51.111565 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 19:33:51.111570 | orchestrator | Friday 29 August 2025 19:31:06 +0000 (0:00:00.123) 0:08:45.648 ********* 2025-08-29 19:33:51.111575 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111579 | orchestrator | 2025-08-29 19:33:51.111587 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 19:33:51.111592 | orchestrator | Friday 29 August 2025 19:31:06 +0000 (0:00:00.260) 0:08:45.909 ********* 2025-08-29 19:33:51.111597 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111602 | orchestrator | 2025-08-29 19:33:51.111607 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 19:33:51.111611 | orchestrator | Friday 29 August 2025 19:31:06 +0000 (0:00:00.201) 0:08:46.111 ********* 2025-08-29 19:33:51.111616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 19:33:51.111621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 19:33:51.111626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 19:33:51.111631 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
19:33:51.111636 | orchestrator | 2025-08-29 19:33:51.111641 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 19:33:51.111646 | orchestrator | Friday 29 August 2025 19:31:07 +0000 (0:00:00.416) 0:08:46.527 ********* 2025-08-29 19:33:51.111650 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111655 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111660 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111665 | orchestrator | 2025-08-29 19:33:51.111670 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 19:33:51.111674 | orchestrator | Friday 29 August 2025 19:31:07 +0000 (0:00:00.312) 0:08:46.839 ********* 2025-08-29 19:33:51.111679 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111687 | orchestrator | 2025-08-29 19:33:51.111692 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 19:33:51.111697 | orchestrator | Friday 29 August 2025 19:31:08 +0000 (0:00:00.812) 0:08:47.652 ********* 2025-08-29 19:33:51.111702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111707 | orchestrator | 2025-08-29 19:33:51.111712 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-08-29 19:33:51.111717 | orchestrator | 2025-08-29 19:33:51.111721 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 19:33:51.111726 | orchestrator | Friday 29 August 2025 19:31:09 +0000 (0:00:00.658) 0:08:48.311 ********* 2025-08-29 19:33:51.111731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.111737 | orchestrator | 2025-08-29 19:33:51.111742 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-08-29 19:33:51.111749 | orchestrator | Friday 29 August 2025 19:31:10 +0000 (0:00:01.356) 0:08:49.667 ********* 2025-08-29 19:33:51.111757 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.111765 | orchestrator | 2025-08-29 19:33:51.111790 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 19:33:51.111801 | orchestrator | Friday 29 August 2025 19:31:11 +0000 (0:00:01.238) 0:08:50.906 ********* 2025-08-29 19:33:51.111808 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.111816 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.111823 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.111831 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.111838 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.111846 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.111853 | orchestrator | 2025-08-29 19:33:51.111862 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 19:33:51.111870 | orchestrator | Friday 29 August 2025 19:31:12 +0000 (0:00:01.178) 0:08:52.085 ********* 2025-08-29 19:33:51.111878 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.111886 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.111893 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.111898 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.111903 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.111908 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.111912 | orchestrator | 2025-08-29 19:33:51.111917 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 19:33:51.111922 | orchestrator | Friday 29 
August 2025 19:31:13 +0000 (0:00:00.667) 0:08:52.752 ********* 2025-08-29 19:33:51.111927 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.111932 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.111937 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.111942 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.111947 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.111951 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.111956 | orchestrator | 2025-08-29 19:33:51.111961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 19:33:51.111966 | orchestrator | Friday 29 August 2025 19:31:14 +0000 (0:00:00.895) 0:08:53.647 ********* 2025-08-29 19:33:51.111971 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.111976 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.111980 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.111985 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.111990 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.111995 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.112000 | orchestrator | 2025-08-29 19:33:51.112005 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 19:33:51.112015 | orchestrator | Friday 29 August 2025 19:31:15 +0000 (0:00:00.663) 0:08:54.311 ********* 2025-08-29 19:33:51.112020 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.112025 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.112033 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.112038 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.112043 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.112048 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.112053 | orchestrator | 2025-08-29 19:33:51.112057 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-08-29 19:33:51.112062 | orchestrator | Friday 29 August 2025 19:31:16 +0000 (0:00:00.990) 0:08:55.301 ********* 2025-08-29 19:33:51.112067 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.112072 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.112082 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.112087 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.112092 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.112097 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.112101 | orchestrator | 2025-08-29 19:33:51.112106 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 19:33:51.112111 | orchestrator | Friday 29 August 2025 19:31:17 +0000 (0:00:00.856) 0:08:56.158 ********* 2025-08-29 19:33:51.112116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.112121 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.112125 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.112130 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:33:51.112135 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:33:51.112139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:33:51.112144 | orchestrator | 2025-08-29 19:33:51.112149 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 19:33:51.112154 | orchestrator | Friday 29 August 2025 19:31:17 +0000 (0:00:00.599) 0:08:56.757 ********* 2025-08-29 19:33:51.112158 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.112163 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.112168 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.112172 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.112177 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.112182 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.112187 | 
TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 29 August 2025 19:31:18 +0000 (0:00:01.227) 0:08:57.984 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 29 August 2025 19:31:19 +0000 (0:00:01.040) 0:08:59.025 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 29 August 2025 19:31:20 +0000 (0:00:00.992) 0:09:00.017 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 29 August 2025 19:31:21 +0000 (0:00:00.610) 0:09:00.628 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 29 August 2025 19:31:22 +0000 (0:00:00.885) 0:09:01.514 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 29 August 2025 19:31:22 +0000 (0:00:00.600) 0:09:02.114 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 29 August 2025 19:31:23 +0000 (0:00:00.858) 0:09:02.973 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 29 August 2025 19:31:24 +0000 (0:00:00.616) 0:09:03.589 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 29 August 2025 19:31:25 +0000 (0:00:00.888) 0:09:04.477 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 29 August 2025 19:31:26 +0000 (0:00:00.679) 0:09:05.156 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 29 August 2025 19:31:26 +0000 (0:00:00.878) 0:09:06.035 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Friday 29 August 2025 19:31:28 +0000 (0:00:01.273) 0:09:07.308 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Friday 29 August 2025 19:31:31 +0000 (0:00:03.586) 0:09:10.895 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Friday 29 August 2025 19:31:33 +0000 (0:00:01.902) 0:09:12.797 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Friday 29 August 2025 19:31:35 +0000 (0:00:01.452) 0:09:14.250 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-08-29 19:33:51.112834 | orchestrator | Friday 29 August 2025 19:31:36 +0000 (0:00:01.271) 0:09:15.521 ********* 2025-08-29 19:33:51.112839 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.112845 | orchestrator | 2025-08-29 19:33:51.112849 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-08-29 19:33:51.112854 | orchestrator | Friday 29 August 2025 19:31:37 +0000 (0:00:01.237) 0:09:16.759 ********* 2025-08-29 19:33:51.112859 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.112864 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.112869 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.112873 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.112882 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.112886 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.112891 | orchestrator | 2025-08-29 19:33:51.112896 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-08-29 19:33:51.112901 | orchestrator | Friday 29 August 2025 19:31:39 +0000 (0:00:01.609) 0:09:18.369 ********* 2025-08-29 19:33:51.112906 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.112910 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.112915 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.112920 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.112925 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.112929 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.112934 | orchestrator | 2025-08-29 19:33:51.112939 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-08-29 19:33:51.112944 | orchestrator | Friday 29 August 2025 19:31:42 +0000 (0:00:03.578) 
0:09:21.948 ********* 2025-08-29 19:33:51.112949 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:33:51.112954 | orchestrator | 2025-08-29 19:33:51.112959 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-08-29 19:33:51.112963 | orchestrator | Friday 29 August 2025 19:31:44 +0000 (0:00:01.306) 0:09:23.255 ********* 2025-08-29 19:33:51.112971 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.112976 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.112981 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.112986 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.112991 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.112995 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.113000 | orchestrator | 2025-08-29 19:33:51.113005 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-08-29 19:33:51.113010 | orchestrator | Friday 29 August 2025 19:31:44 +0000 (0:00:00.731) 0:09:23.986 ********* 2025-08-29 19:33:51.113015 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.113023 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.113028 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.113033 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:33:51.113037 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:33:51.113042 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:33:51.113047 | orchestrator | 2025-08-29 19:33:51.113052 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-08-29 19:33:51.113057 | orchestrator | Friday 29 August 2025 19:31:48 +0000 (0:00:03.614) 0:09:27.601 ********* 2025-08-29 19:33:51.113062 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113066 | 
orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113071 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113076 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:33:51.113081 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:33:51.113085 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:33:51.113090 | orchestrator | 2025-08-29 19:33:51.113095 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-08-29 19:33:51.113099 | orchestrator | 2025-08-29 19:33:51.113104 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 19:33:51.113109 | orchestrator | Friday 29 August 2025 19:31:49 +0000 (0:00:00.924) 0:09:28.525 ********* 2025-08-29 19:33:51.113114 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.113119 | orchestrator | 2025-08-29 19:33:51.113124 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 19:33:51.113128 | orchestrator | Friday 29 August 2025 19:31:50 +0000 (0:00:00.860) 0:09:29.385 ********* 2025-08-29 19:33:51.113133 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:33:51.113141 | orchestrator | 2025-08-29 19:33:51.113146 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 19:33:51.113151 | orchestrator | Friday 29 August 2025 19:31:50 +0000 (0:00:00.566) 0:09:29.952 ********* 2025-08-29 19:33:51.113156 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113161 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113165 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113170 | orchestrator | 2025-08-29 19:33:51.113175 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2025-08-29 19:33:51.113180 | orchestrator | Friday 29 August 2025 19:31:51 +0000 (0:00:00.625) 0:09:30.578 ********* 2025-08-29 19:33:51.113185 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113189 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113194 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113199 | orchestrator | 2025-08-29 19:33:51.113204 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 19:33:51.113209 | orchestrator | Friday 29 August 2025 19:31:52 +0000 (0:00:00.801) 0:09:31.379 ********* 2025-08-29 19:33:51.113213 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113218 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113223 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113228 | orchestrator | 2025-08-29 19:33:51.113233 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 19:33:51.113237 | orchestrator | Friday 29 August 2025 19:31:52 +0000 (0:00:00.747) 0:09:32.127 ********* 2025-08-29 19:33:51.113242 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113247 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113252 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113256 | orchestrator | 2025-08-29 19:33:51.113261 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 19:33:51.113266 | orchestrator | Friday 29 August 2025 19:31:53 +0000 (0:00:00.706) 0:09:32.834 ********* 2025-08-29 19:33:51.113271 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113276 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113281 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113285 | orchestrator | 2025-08-29 19:33:51.113290 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 
19:33:51.113295 | orchestrator | Friday 29 August 2025 19:31:54 +0000 (0:00:00.581) 0:09:33.415 ********* 2025-08-29 19:33:51.113300 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113304 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113309 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113314 | orchestrator | 2025-08-29 19:33:51.113319 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 19:33:51.113323 | orchestrator | Friday 29 August 2025 19:31:54 +0000 (0:00:00.321) 0:09:33.737 ********* 2025-08-29 19:33:51.113328 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113333 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113338 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113342 | orchestrator | 2025-08-29 19:33:51.113347 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 19:33:51.113352 | orchestrator | Friday 29 August 2025 19:31:54 +0000 (0:00:00.319) 0:09:34.056 ********* 2025-08-29 19:33:51.113357 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113361 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113366 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113370 | orchestrator | 2025-08-29 19:33:51.113375 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 19:33:51.113379 | orchestrator | Friday 29 August 2025 19:31:55 +0000 (0:00:00.803) 0:09:34.860 ********* 2025-08-29 19:33:51.113384 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113388 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113393 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113397 | orchestrator | 2025-08-29 19:33:51.113402 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 19:33:51.113414 | orchestrator | Friday 
29 August 2025 19:31:56 +0000 (0:00:01.142) 0:09:36.003 ********* 2025-08-29 19:33:51.113419 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113423 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113428 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113432 | orchestrator | 2025-08-29 19:33:51.113437 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 19:33:51.113442 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.315) 0:09:36.319 ********* 2025-08-29 19:33:51.113446 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113454 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113458 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113463 | orchestrator | 2025-08-29 19:33:51.113468 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 19:33:51.113472 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.318) 0:09:36.637 ********* 2025-08-29 19:33:51.113477 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113481 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113486 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113490 | orchestrator | 2025-08-29 19:33:51.113495 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 19:33:51.113499 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.358) 0:09:36.995 ********* 2025-08-29 19:33:51.113504 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113508 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113513 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113517 | orchestrator | 2025-08-29 19:33:51.113522 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 19:33:51.113526 | orchestrator | Friday 29 August 2025 19:31:58 +0000 
(0:00:00.598) 0:09:37.593 ********* 2025-08-29 19:33:51.113531 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113535 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113540 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113544 | orchestrator | 2025-08-29 19:33:51.113549 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 19:33:51.113553 | orchestrator | Friday 29 August 2025 19:31:58 +0000 (0:00:00.325) 0:09:37.919 ********* 2025-08-29 19:33:51.113558 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113562 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113567 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113571 | orchestrator | 2025-08-29 19:33:51.113576 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 19:33:51.113580 | orchestrator | Friday 29 August 2025 19:31:59 +0000 (0:00:00.330) 0:09:38.249 ********* 2025-08-29 19:33:51.113585 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113589 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113594 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113598 | orchestrator | 2025-08-29 19:33:51.113603 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 19:33:51.113608 | orchestrator | Friday 29 August 2025 19:31:59 +0000 (0:00:00.336) 0:09:38.586 ********* 2025-08-29 19:33:51.113612 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113617 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113621 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113626 | orchestrator | 2025-08-29 19:33:51.113630 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 19:33:51.113635 | orchestrator | Friday 29 August 2025 19:32:00 +0000 (0:00:00.602) 
0:09:39.188 ********* 2025-08-29 19:33:51.113639 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113644 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113648 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113653 | orchestrator | 2025-08-29 19:33:51.113657 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 19:33:51.113662 | orchestrator | Friday 29 August 2025 19:32:00 +0000 (0:00:00.395) 0:09:39.584 ********* 2025-08-29 19:33:51.113671 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:33:51.113676 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:33:51.113680 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:33:51.113685 | orchestrator | 2025-08-29 19:33:51.113689 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-08-29 19:33:51.113694 | orchestrator | Friday 29 August 2025 19:32:01 +0000 (0:00:00.699) 0:09:40.283 ********* 2025-08-29 19:33:51.113698 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.113703 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.113707 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-08-29 19:33:51.113712 | orchestrator | 2025-08-29 19:33:51.113717 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-08-29 19:33:51.113721 | orchestrator | Friday 29 August 2025 19:32:02 +0000 (0:00:00.868) 0:09:41.152 ********* 2025-08-29 19:33:51.113726 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.113730 | orchestrator | 2025-08-29 19:33:51.113735 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-08-29 19:33:51.113739 | orchestrator | Friday 29 August 2025 19:32:04 +0000 (0:00:02.581) 0:09:43.733 ********* 2025-08-29 19:33:51.113745 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-08-29 19:33:51.113752 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.113756 | orchestrator | 2025-08-29 19:33:51.113761 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-08-29 19:33:51.113765 | orchestrator | Friday 29 August 2025 19:32:04 +0000 (0:00:00.314) 0:09:44.047 ********* 2025-08-29 19:33:51.113783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 19:33:51.113797 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 19:33:51.113801 | orchestrator | 2025-08-29 19:33:51.113806 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-08-29 19:33:51.113811 | orchestrator | Friday 29 August 2025 19:32:12 +0000 (0:00:07.835) 0:09:51.882 ********* 2025-08-29 19:33:51.113818 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:33:51.113823 | orchestrator | 2025-08-29 19:33:51.113827 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-08-29 19:33:51.113832 | orchestrator | Friday 29 August 2025 19:32:16 +0000 (0:00:03.694) 0:09:55.577 ********* 2025-08-29 19:33:51.113836 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-08-29 19:33:51.113841 | orchestrator | 2025-08-29 19:33:51.113846 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-08-29 19:33:51.113850 | orchestrator | Friday 29 August 2025 19:32:17 +0000 (0:00:00.834) 0:09:56.411 ********* 2025-08-29 19:33:51.113855 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 19:33:51.113859 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 19:33:51.113864 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 19:33:51.113868 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-08-29 19:33:51.113873 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-08-29 19:33:51.113877 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-08-29 19:33:51.113885 | orchestrator | 2025-08-29 19:33:51.113890 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-08-29 19:33:51.113894 | orchestrator | Friday 29 August 2025 19:32:18 +0000 (0:00:01.043) 0:09:57.454 ********* 2025-08-29 19:33:51.113899 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 19:33:51.113903 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 19:33:51.113908 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 19:33:51.113913 | orchestrator | 2025-08-29 19:33:51.113917 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-08-29 19:33:51.113922 | orchestrator | Friday 29 August 2025 19:32:20 +0000 (0:00:02.229) 0:09:59.684 ********* 2025-08-29 19:33:51.113926 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 19:33:51.113931 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-08-29 19:33:51.113935 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.113940 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 19:33:51.113944 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 19:33:51.113949 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.113953 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 19:33:51.113958 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 19:33:51.113962 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.113967 | orchestrator | 2025-08-29 19:33:51.113972 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-08-29 19:33:51.113976 | orchestrator | Friday 29 August 2025 19:32:21 +0000 (0:00:01.192) 0:10:00.876 ********* 2025-08-29 19:33:51.113981 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:33:51.113985 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:33:51.113990 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:33:51.113994 | orchestrator | 2025-08-29 19:33:51.113999 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-08-29 19:33:51.114003 | orchestrator | Friday 29 August 2025 19:32:24 +0000 (0:00:02.728) 0:10:03.604 ********* 2025-08-29 19:33:51.114008 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:33:51.114033 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:33:51.114039 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:33:51.114043 | orchestrator | 2025-08-29 19:33:51.114048 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-08-29 19:33:51.114052 | orchestrator | Friday 29 August 2025 19:32:25 +0000 (0:00:00.637) 0:10:04.242 ********* 2025-08-29 19:33:51.114057 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Friday 29 August 2025 19:32:25 +0000 (0:00:00.596) 0:10:04.838 *********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Friday 29 August 2025 19:32:26 +0000 (0:00:00.765) 0:10:05.604 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Friday 29 August 2025 19:32:27 +0000 (0:00:01.258) 0:10:06.862 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Friday 29 August 2025 19:32:28 +0000 (0:00:01.143) 0:10:08.006 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Friday 29 August 2025 19:32:30 +0000 (0:00:01.631) 0:10:09.637 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Friday 29 August 2025 19:32:32 +0000 (0:00:02.248) 0:10:11.885 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 29 August 2025 19:32:33 +0000 (0:00:01.241) 0:10:13.127 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Friday 29 August 2025 19:32:34 +0000 (0:00:00.929) 0:10:14.057 *********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Friday 29 August 2025 19:32:35 +0000 (0:00:00.555) 0:10:14.612 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Friday 29 August 2025 19:32:35 +0000 (0:00:00.326) 0:10:14.939 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Friday 29 August 2025 19:32:37 +0000 (0:00:01.492) 0:10:16.431 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Friday 29 August 2025 19:32:37 +0000 (0:00:00.639) 0:10:17.071 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
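Annotation: the "Generate systemd unit file" / "Enable ceph-mds.target" / "Systemd start mds container" steps above template a unit file onto each node and enable it via a ceph-mds.target. A minimal sketch of what such a rendered unit might look like follows; the image name and ExecStart line are illustrative assumptions for the sketch, not values taken from this job:

```shell
# Render a containerized ceph-mds unit the way a template task would.
# Image tag and docker invocation below are assumptions for illustration.
outdir=$(mktemp -d)
cat > "$outdir/ceph-mds@.service" <<'EOF'
[Unit]
Description=Ceph MDS
After=network.target

[Service]
# Run the MDS inside a container; %i is the instance (node) name
ExecStart=/usr/bin/docker run --rm --name ceph-mds-%i quay.io/ceph/daemon:latest
Restart=always

[Install]
WantedBy=ceph-mds.target
EOF
grep -c '^ExecStart=' "$outdir/ceph-mds@.service"
```

On a real node the role would then run the equivalent of `systemctl enable --now ceph-mds.target`, which is what the "Enable ceph-mds.target" and "Systemd start mds container" tasks report as changed.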
PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 29 August 2025 19:32:38 +0000 (0:00:00.550) 0:10:17.621 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 29 August 2025 19:32:39 +0000 (0:00:00.730) 0:10:18.352 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 29 August 2025 19:32:39 +0000 (0:00:00.641) 0:10:18.993 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 29 August 2025 19:32:40 +0000 (0:00:00.546) 0:10:19.540 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 29 August 2025 19:32:41 +0000 (0:00:00.715) 0:10:20.255 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 29 August 2025 19:32:41 +0000 (0:00:00.698) 0:10:20.953 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 29 August 2025 19:32:42 +0000 (0:00:00.713) 0:10:21.667 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 29 August 2025 19:32:43 +0000 (0:00:00.547) 0:10:22.214 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 29 August 2025 19:32:43 +0000 (0:00:00.300) 0:10:22.515 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 29 August 2025 19:32:43 +0000 (0:00:00.314) 0:10:22.830 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 29 August 2025 19:32:44 +0000 (0:00:00.729) 0:10:23.559 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 29 August 2025 19:32:45 +0000 (0:00:01.002) 0:10:24.562 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 29 August 2025 19:32:45 +0000 (0:00:00.319) 0:10:24.881 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 29 August 2025 19:32:46 +0000 (0:00:00.304) 0:10:25.185 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 29 August 2025 19:32:46 +0000 (0:00:00.334) 0:10:25.520 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 29 August 2025 19:32:46 +0000 (0:00:00.584) 0:10:26.104 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 29 August 2025 19:32:47 +0000 (0:00:00.357) 0:10:26.462 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 29 August 2025 19:32:47 +0000 (0:00:00.323) 0:10:26.786 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 29 August 2025 19:32:47 +0000 (0:00:00.301) 0:10:27.088 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 29 August 2025 19:32:48 +0000 (0:00:00.527) 0:10:27.615 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
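Annotation: the "Check for a ... container" tasks above probe each node for running daemon containers, and the following "Set_fact handler_*_status" tasks record the result so later handlers know whether a restart applies. A minimal shell sketch of that check-then-record pattern, with the `docker ps` output mocked (container names here are illustrative, not the role's real naming):

```shell
# Mocked 'docker ps --format {{.Names}}' output for the sketch
running_containers="ceph-osd-0 ceph-mds-testbed-node-3 ceph-rgw-rgw0"

# Record true/false depending on whether a daemon's container is running
handler_status() {
  case " $running_containers " in
    *"ceph-$1"*) echo true ;;
    *)           echo false ;;
  esac
}

handler_osd_status=$(handler_status osd)
handler_mgr_status=$(handler_status mgr)
echo "osd=$handler_osd_status mgr=$handler_mgr_status"
```

This mirrors why the mgr/mon checks report `skipping` on testbed-node-3..5 in this run: those daemons live on other nodes, so no status fact is needed there.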
TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 29 August 2025 19:32:48 +0000 (0:00:00.343) 0:10:27.958 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Friday 29 August 2025 19:32:49 +0000 (0:00:00.555) 0:10:28.513 *********
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Friday 29 August 2025 19:32:50 +0000 (0:00:00.774) 0:10:29.288 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Friday 29 August 2025 19:32:52 +0000 (0:00:02.134) 0:10:31.423 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Friday 29 August 2025 19:32:53 +0000 (0:00:01.174) 0:10:32.597 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Friday 29 August 2025 19:32:53 +0000 (0:00:00.330) 0:10:32.927 *********
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Friday 29 August 2025 19:32:54 +0000 (0:00:00.841) 0:10:33.769 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Friday 29 August 2025 19:32:55 +0000 (0:00:00.906) 0:10:34.675 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Friday 29 August 2025 19:32:59 +0000 (0:00:04.225) 0:10:38.901 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Friday 29 August 2025 19:33:03 +0000 (0:00:03.364) 0:10:42.265 *********
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Friday 29 August 2025 19:33:04 +0000 (0:00:01.374) 0:10:43.640 *********
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Friday 29 August 2025 19:33:04 +0000 (0:00:00.243) 0:10:43.883 *********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Friday 29 August 2025 19:33:05 +0000 (0:00:00.657) 0:10:44.541 *********
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]
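Annotation: each pool item above carries pg_num=8, size=3, type=replicated. The CLI equivalent of creating those pools by hand follows the common `ceph osd pool create` / `ceph osd pool set` form; the sketch below only echoes the commands as a dry run, since no cluster is reachable here:

```shell
# Dry-run sketch: print CLI equivalents of the rgw pool specs in this log.
# Commands are echoed, not executed; no cluster is touched.
pools="default.rgw.buckets.data default.rgw.buckets.index default.rgw.control default.rgw.log default.rgw.meta"
pg_num=8
size=3
for pool in $pools; do
  echo "ceph osd pool create $pool $pg_num replicated"
  echo "ceph osd pool set $pool size $size"
done
```

In the actual run these five creations are what the "Create rgw pools" task performs, delegated to the first monitor (testbed-node-0).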
TASK [ceph-rgw : Create rgw pools] *********************************************
Friday 29 August 2025 19:33:06 +0000 (0:00:00.604) 0:10:45.146 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
Friday 29 August 2025 19:33:37 +0000 (0:00:31.707) 0:11:16.853 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
Friday 29 August 2025 19:33:38 +0000 (0:00:00.329) 0:11:17.183 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
Friday 29 August 2025 19:33:38 +0000 (0:00:00.601) 0:11:17.785 *********
included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Include_task systemd.yml] *************************************
Friday 29 August 2025 19:33:39 +0000 (0:00:00.549) 0:11:18.334 *********
included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Generate systemd unit file] ***********************************
Friday 29 August 2025 19:33:39 +0000 (0:00:00.768) 0:11:19.103 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
Friday 29 August 2025 19:33:41 +0000 (0:00:01.275) 0:11:20.378 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
Friday 29 August 2025 19:33:42 +0000 (0:00:01.139) 0:11:21.518 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Systemd start rgw container] **********************************
Friday 29 August 2025 19:33:44 +0000 (0:00:01.696) 0:11:23.214 *********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 29 August 2025 19:33:46 +0000 (0:00:02.576) 0:11:25.791 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Friday 29 August 2025 19:33:47 +0000 (0:00:00.376) 0:11:26.168 *********
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Friday 29 August 2025 19:33:47 +0000 (0:00:00.942) 0:11:27.111 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Friday 29 August 2025 19:33:48 +0000 (0:00:00.328) 0:11:27.439 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Friday 29 August 2025 19:33:48 +0000 (0:00:00.330) 0:11:27.770 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Friday 29 August 2025 19:33:49 +0000 (0:00:01.127) 0:11:28.897 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Friday 29 August 2025 19:33:50 +0000 (0:00:00.271) 0:11:29.168 *********
===============================================================================
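Annotation: a PLAY RECAP line like the ones above is easy to post-process in CI, e.g. to flag a failed host or an unexpectedly large change set. A small sketch that pulls the changed/failed counters out of one recap line with sed:

```shell
# Extract counters from an Ansible PLAY RECAP line (sample taken from this log)
recap='testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0'
changed=$(echo "$recap" | sed -n 's/.*changed=\([0-9]*\).*/\1/p')
failed=$(echo "$recap" | sed -n 's/.*failed=\([0-9]*\).*/\1/p')
echo "changed=$changed failed=$failed"
```

A gate job could then fail fast when `failed` is non-zero instead of scanning the full console output.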
2025-08-29 19:33:51.115890 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 76.80s
2025-08-29 19:33:51.115894 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.20s
2025-08-29 19:33:51.115899 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.71s
2025-08-29 19:33:51.115903 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.88s
2025-08-29 19:33:51.115908 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.01s
2025-08-29 19:33:51.115912 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.63s
2025-08-29 19:33:51.115917 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.07s
2025-08-29 19:33:51.115921 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.69s
2025-08-29 19:33:51.115929 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.38s
2025-08-29 19:33:51.115934 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.84s
2025-08-29 19:33:51.115939 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.51s
2025-08-29 19:33:51.115943 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.50s
2025-08-29 19:33:51.115947 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.69s
2025-08-29 19:33:51.115952 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.23s
2025-08-29 19:33:51.115956 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.90s
2025-08-29 19:33:51.115961 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.84s
2025-08-29 19:33:51.115965 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.69s
2025-08-29 19:33:51.115970 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.61s
2025-08-29 19:33:51.115974 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.59s
2025-08-29 19:33:51.115979 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.58s
2025-08-29 19:33:54.137623 | orchestrator | 2025-08-29 19:33:54 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state STARTED
2025-08-29 19:33:54.139587 | orchestrator | 2025-08-29 19:33:54 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED
2025-08-29 19:33:54.141745 | orchestrator | 2025-08-29 19:33:54 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED
2025-08-29 19:33:54.142069 | orchestrator | 2025-08-29 19:33:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:33:57.202629 | orchestrator | 2025-08-29 19:33:57 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state STARTED
2025-08-29 19:33:57.204173 | orchestrator | 2025-08-29 19:33:57 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED
2025-08-29 19:33:57.206280 | orchestrator | 2025-08-29 19:33:57 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED
2025-08-29 19:33:57.206785 | orchestrator | 2025-08-29 19:33:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:34:00.257213 | orchestrator | 2025-08-29 19:34:00 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state STARTED
2025-08-29 19:34:00.259376 | orchestrator | 2025-08-29 19:34:00 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED
2025-08-29 19:34:00.262289 | orchestrator | 2025-08-29 19:34:00 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED
2025-08-29 19:34:00.262353 | orchestrator | 2025-08-29 19:34:00 |
INFO  | Wait 1 second(s) until the next check
2025-08-29 19:34:49.013703 | orchestrator | 2025-08-29 19:34:49 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state
STARTED
2025-08-29 19:34:49.014880 | orchestrator | 2025-08-29 19:34:49 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state STARTED
2025-08-29 19:34:49.017168 | orchestrator | 2025-08-29 19:34:49 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state STARTED
2025-08-29 19:34:49.017709 | orchestrator | 2025-08-29 19:34:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:34:52.059188 | orchestrator | 2025-08-29 19:34:52 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state STARTED
2025-08-29 19:34:52.059542 | orchestrator | 2025-08-29 19:34:52 | INFO  | Task 93b117a1-da30-470e-a203-0699ac9492a3 is in state SUCCESS
2025-08-29 19:34:52.060889 | orchestrator |
2025-08-29 19:34:52.060924 | orchestrator |
2025-08-29 19:34:52.060933 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:34:52.060941 | orchestrator |
2025-08-29 19:34:52.060949 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:34:52.060957 | orchestrator | Friday 29 August 2025 19:31:37 +0000 (0:00:00.268) 0:00:00.268 *********
2025-08-29 19:34:52.060964 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.060973 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.060980 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.060987 | orchestrator |
2025-08-29 19:34:52.060994 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:34:52.061002 | orchestrator | Friday 29 August 2025 19:31:37 +0000 (0:00:00.329) 0:00:00.598 *********
2025-08-29 19:34:52.061009 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-08-29 19:34:52.061017 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-08-29 19:34:52.061024 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-08-29 19:34:52.061031 | orchestrator | 2025-08-29
19:34:52.061038 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-08-29 19:34:52.061045 | orchestrator |
2025-08-29 19:34:52.061052 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 19:34:52.061059 | orchestrator | Friday 29 August 2025 19:31:38 +0000 (0:00:00.461) 0:00:01.059 *********
2025-08-29 19:34:52.061067 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:34:52.061075 | orchestrator |
2025-08-29 19:34:52.061094 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-08-29 19:34:52.061101 | orchestrator | Friday 29 August 2025 19:31:38 +0000 (0:00:00.557) 0:00:01.617 *********
2025-08-29 19:34:52.061108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:34:52.061114 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:34:52.061121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 19:34:52.061127 | orchestrator |
2025-08-29 19:34:52.061134 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-08-29 19:34:52.061141 | orchestrator | Friday 29 August 2025 19:31:39 +0000 (0:00:00.615) 0:00:02.232 *********
2025-08-29 19:34:52.061150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.061160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.061188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.061198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.061211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.061219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.061231 | orchestrator | 2025-08-29 19:34:52.061238 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 19:34:52.061245 | orchestrator | Friday 29 August 2025 19:31:41 +0000 (0:00:01.709) 0:00:03.941 ********* 2025-08-29 19:34:52.061252 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:34:52.061259 | orchestrator | 2025-08-29 19:34:52.061265 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA 
certificates] ***** 2025-08-29 19:34:52.061272 | orchestrator | Friday 29 August 2025 19:31:41 +0000 (0:00:00.514) 0:00:04.456 ********* 2025-08-29 19:34:52.061285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.061297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.061304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.061311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.061328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.061339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-08-29 19:34:52.061347 | orchestrator | 2025-08-29 19:34:52.061354 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 19:34:52.061361 | orchestrator | Friday 29 August 2025 19:31:44 +0000 (0:00:02.792) 0:00:07.248 ********* 2025-08-29 19:34:52.061368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:34:52.061430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:34:52.061444 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:34:52.061451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:34:52.061465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:34:52.061472 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.061484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:34:52.061492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:34:52.061504 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.061511 | orchestrator | 2025-08-29 19:34:52.061517 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 19:34:52.061524 | orchestrator | Friday 29 August 2025 19:31:46 +0000 (0:00:01.458) 0:00:08.707 ********* 2025-08-29 19:34:52.061531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:34:52.061544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:34:52.061551 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.061562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:34:52.061570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:34:52.061582 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:34:52.061589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 19:34:52.061965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 19:34:52.061993 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.062005 | orchestrator | 2025-08-29 19:34:52.062062 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 19:34:52.062078 | orchestrator | Friday 29 August 2025 19:31:47 +0000 (0:00:01.239) 0:00:09.946 ********* 2025-08-29 19:34:52.062098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-08-29 19:34:52.062121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.062129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.062143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.062155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.062163 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.062175 | orchestrator | 2025-08-29 19:34:52.062182 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 19:34:52.062188 | orchestrator | Friday 29 August 2025 19:31:49 +0000 (0:00:02.506) 0:00:12.452 ********* 2025-08-29 19:34:52.062195 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.062202 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:34:52.062209 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:34:52.062216 | orchestrator | 2025-08-29 19:34:52.062222 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 19:34:52.062229 | orchestrator | Friday 29 August 2025 19:31:53 +0000 (0:00:03.319) 0:00:15.772 ********* 2025-08-29 19:34:52.062236 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.062243 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:34:52.062249 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 19:34:52.062256 | orchestrator | 2025-08-29 19:34:52.062262 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 19:34:52.062269 | orchestrator | Friday 29 August 2025 19:31:55 +0000 (0:00:02.042) 0:00:17.814 ********* 2025-08-29 19:34:52.062276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.062290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.062301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 19:34:52.062313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-08-29 19:34:52.062321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.062337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 19:34:52.062349 | orchestrator | 2025-08-29 19:34:52.062360 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 19:34:52.062371 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:02.196) 0:00:20.011 ********* 2025-08-29 19:34:52.062382 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.062401 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:34:52.062413 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.062424 | orchestrator | 2025-08-29 19:34:52.062435 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 19:34:52.062446 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.305) 0:00:20.316 ********* 2025-08-29 19:34:52.062459 | orchestrator | 2025-08-29 19:34:52.062477 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 19:34:52.062489 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.066) 0:00:20.383 ********* 2025-08-29 19:34:52.062499 | orchestrator | 2025-08-29 19:34:52.062511 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 19:34:52.062521 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.067) 0:00:20.450 ********* 2025-08-29 19:34:52.062532 | orchestrator | 2025-08-29 19:34:52.062543 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-08-29 19:34:52.062555 | orchestrator | Friday 29 August 2025 19:31:57 +0000 (0:00:00.065) 0:00:20.516 ********* 2025-08-29 19:34:52.062567 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.062580 | orchestrator | 2025-08-29 
19:34:52.062592 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-08-29 19:34:52.062604 | orchestrator | Friday 29 August 2025 19:31:58 +0000 (0:00:00.204) 0:00:20.720 ********* 2025-08-29 19:34:52.062616 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.062627 | orchestrator | 2025-08-29 19:34:52.062638 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-08-29 19:34:52.062650 | orchestrator | Friday 29 August 2025 19:31:58 +0000 (0:00:00.666) 0:00:21.387 ********* 2025-08-29 19:34:52.062662 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.062674 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:34:52.062686 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:34:52.062697 | orchestrator | 2025-08-29 19:34:52.062709 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-08-29 19:34:52.062741 | orchestrator | Friday 29 August 2025 19:33:08 +0000 (0:01:09.642) 0:01:31.029 ********* 2025-08-29 19:34:52.062753 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.062765 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:34:52.062777 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:34:52.062789 | orchestrator | 2025-08-29 19:34:52.062800 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 19:34:52.062812 | orchestrator | Friday 29 August 2025 19:34:38 +0000 (0:01:30.113) 0:03:01.142 ********* 2025-08-29 19:34:52.062825 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:34:52.062837 | orchestrator | 2025-08-29 19:34:52.062849 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-08-29 19:34:52.062860 | orchestrator | Friday 29 August 2025 19:34:38 +0000 (0:00:00.466) 
0:03:01.609 ********* 2025-08-29 19:34:52.062872 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:34:52.062884 | orchestrator | 2025-08-29 19:34:52.062896 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-08-29 19:34:52.062908 | orchestrator | Friday 29 August 2025 19:34:41 +0000 (0:00:02.683) 0:03:04.292 ********* 2025-08-29 19:34:52.062919 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:34:52.062930 | orchestrator | 2025-08-29 19:34:52.062941 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-08-29 19:34:52.062952 | orchestrator | Friday 29 August 2025 19:34:43 +0000 (0:00:02.324) 0:03:06.617 ********* 2025-08-29 19:34:52.062963 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.062974 | orchestrator | 2025-08-29 19:34:52.062986 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-08-29 19:34:52.062997 | orchestrator | Friday 29 August 2025 19:34:46 +0000 (0:00:02.856) 0:03:09.473 ********* 2025-08-29 19:34:52.063009 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.063032 | orchestrator | 2025-08-29 19:34:52.063043 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:34:52.063056 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:34:52.063069 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 19:34:52.063080 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 19:34:52.063091 | orchestrator | 2025-08-29 19:34:52.063102 | orchestrator | 2025-08-29 19:34:52.063113 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:34:52.063132 | orchestrator | Friday 29 August 2025 
19:34:49 +0000 (0:00:02.544) 0:03:12.018 ********* 2025-08-29 19:34:52.063143 | orchestrator | =============================================================================== 2025-08-29 19:34:52.063154 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 90.11s 2025-08-29 19:34:52.063165 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.64s 2025-08-29 19:34:52.063176 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.32s 2025-08-29 19:34:52.063187 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.86s 2025-08-29 19:34:52.063198 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.79s 2025-08-29 19:34:52.063209 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.68s 2025-08-29 19:34:52.063220 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.54s 2025-08-29 19:34:52.063231 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.51s 2025-08-29 19:34:52.063242 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.32s 2025-08-29 19:34:52.063253 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.20s 2025-08-29 19:34:52.063264 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.04s 2025-08-29 19:34:52.063274 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-08-29 19:34:52.063291 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.46s 2025-08-29 19:34:52.063301 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.24s 2025-08-29 19:34:52.063313 | orchestrator | opensearch : Perform a flush 
-------------------------------------------- 0.67s 2025-08-29 19:34:52.063324 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.62s 2025-08-29 19:34:52.063335 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2025-08-29 19:34:52.063346 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-08-29 19:34:52.063356 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-08-29 19:34:52.063367 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-08-29 19:34:52.063378 | orchestrator | 2025-08-29 19:34:52 | INFO  | Task 55572be6-7ed6-45a0-9016-04fa1c7f9960 is in state SUCCESS 2025-08-29 19:34:52.063389 | orchestrator | 2025-08-29 19:34:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:34:52.063400 | orchestrator | 2025-08-29 19:34:52.063411 | orchestrator | 2025-08-29 19:34:52.063422 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-08-29 19:34:52.063433 | orchestrator | 2025-08-29 19:34:52.063445 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 19:34:52.063455 | orchestrator | Friday 29 August 2025 19:31:37 +0000 (0:00:00.102) 0:00:00.102 ********* 2025-08-29 19:34:52.063466 | orchestrator | ok: [localhost] => { 2025-08-29 19:34:52.063477 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-08-29 19:34:52.063495 | orchestrator | } 2025-08-29 19:34:52.063506 | orchestrator | 2025-08-29 19:34:52.063517 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-08-29 19:34:52.063528 | orchestrator | Friday 29 August 2025 19:31:37 +0000 (0:00:00.057) 0:00:00.159 ********* 2025-08-29 19:34:52.063539 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-08-29 19:34:52.063551 | orchestrator | ...ignoring 2025-08-29 19:34:52.063562 | orchestrator | 2025-08-29 19:34:52.063573 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-08-29 19:34:52.063584 | orchestrator | Friday 29 August 2025 19:31:40 +0000 (0:00:02.914) 0:00:03.074 ********* 2025-08-29 19:34:52.063595 | orchestrator | skipping: [localhost] 2025-08-29 19:34:52.063606 | orchestrator | 2025-08-29 19:34:52.063617 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-08-29 19:34:52.063628 | orchestrator | Friday 29 August 2025 19:31:40 +0000 (0:00:00.059) 0:00:03.133 ********* 2025-08-29 19:34:52.063639 | orchestrator | ok: [localhost] 2025-08-29 19:34:52.063650 | orchestrator | 2025-08-29 19:34:52.063661 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:34:52.063672 | orchestrator | 2025-08-29 19:34:52.063683 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:34:52.063694 | orchestrator | Friday 29 August 2025 19:31:40 +0000 (0:00:00.145) 0:00:03.278 ********* 2025-08-29 19:34:52.063705 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:34:52.063770 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:34:52.063781 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:34:52.063792 | orchestrator | 2025-08-29 19:34:52.063804 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:34:52.063814 | orchestrator | Friday 29 August 2025 19:31:40 +0000 (0:00:00.326) 0:00:03.604 ********* 2025-08-29 19:34:52.063825 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 19:34:52.063837 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 19:34:52.063848 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 19:34:52.063859 | orchestrator | 2025-08-29 19:34:52.063870 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 19:34:52.063881 | orchestrator | 2025-08-29 19:34:52.063892 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 19:34:52.063903 | orchestrator | Friday 29 August 2025 19:31:41 +0000 (0:00:00.527) 0:00:04.132 ********* 2025-08-29 19:34:52.063920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 19:34:52.063931 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 19:34:52.063942 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 19:34:52.063953 | orchestrator | 2025-08-29 19:34:52.063963 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 19:34:52.063974 | orchestrator | Friday 29 August 2025 19:31:41 +0000 (0:00:00.400) 0:00:04.533 ********* 2025-08-29 19:34:52.063985 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:34:52.063997 | orchestrator | 2025-08-29 19:34:52.064007 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-08-29 19:34:52.064018 | orchestrator | Friday 29 August 2025 19:31:42 +0000 (0:00:00.663) 0:00:05.196 ********* 2025-08-29 19:34:52.064036 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 19:34:52.064064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 19:34:52.064084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 19:34:52.064102 | orchestrator | 2025-08-29 19:34:52.064114 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 19:34:52.064125 | orchestrator | Friday 29 August 2025 19:31:46 +0000 (0:00:03.686) 0:00:08.883 ********* 2025-08-29 19:34:52.064136 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:34:52.064147 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.064158 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.064169 | orchestrator | 2025-08-29 19:34:52.064180 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 19:34:52.064191 | orchestrator | Friday 29 August 2025 19:31:47 +0000 (0:00:00.780) 0:00:09.664 ********* 2025-08-29 19:34:52.064202 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:34:52.064213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.064224 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:34:52.064235 | orchestrator | 2025-08-29 19:34:52.064246 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 19:34:52.064257 | orchestrator | Friday 29 August 2025 19:31:48 +0000 (0:00:01.396) 0:00:11.060 ********* 2025-08-29 19:34:52.064275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 19:34:52.064293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 19:34:52.064317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 19:34:52.064330 | orchestrator |
2025-08-29 19:34:52.064341 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-08-29 19:34:52.064357 | orchestrator | Friday 29 August 2025 19:31:52 +0000 (0:00:04.309) 0:00:15.369 *********
2025-08-29 19:34:52.064368 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.064379 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.064390 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.064401 | orchestrator |
2025-08-29 19:34:52.064412 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-08-29 19:34:52.064428 | orchestrator | Friday 29 August 2025 19:31:53 +0000 (0:00:01.082) 0:00:16.451 *********
2025-08-29 19:34:52.064439 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:34:52.064450 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.064461 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:34:52.064474 | orchestrator |
2025-08-29 19:34:52.064485 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 19:34:52.064497 | orchestrator | Friday 29 August 2025 19:31:58 +0000 (0:00:00.534) 0:00:21.079 *********
2025-08-29 19:34:52.064509 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:34:52.064521 | orchestrator |
2025-08-29 19:34:52.064533 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29
19:34:52.064545 | orchestrator | Friday 29 August 2025 19:31:58 +0000 (0:00:00.534) 0:00:21.613 ********* 2025-08-29 19:34:52.064562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064576 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 19:34:52.064595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064618 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.064635 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064647 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.064659 | orchestrator | 2025-08-29 19:34:52.064671 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-08-29 19:34:52.064683 | orchestrator | Friday 29 August 2025 19:32:03 +0000 (0:00:04.278) 0:00:25.892 ********* 2025-08-29 19:34:52.064703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-08-29 19:34:52.064743 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.064762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064774 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 19:34:52.064787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064805 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:34:52.064816 | orchestrator | 2025-08-29 
19:34:52.064827 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 19:34:52.064844 | orchestrator | Friday 29 August 2025 19:32:05 +0000 (0:00:02.085) 0:00:27.977 ********* 2025-08-29 19:34:52.064861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064873 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:34:52.064885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-08-29 19:34:52.064903 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:34:52.064927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 19:34:52.064940 | orchestrator | skipping: 
[testbed-node-1]
2025-08-29 19:34:52.064951 | orchestrator |
2025-08-29 19:34:52.064962 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-08-29 19:34:52.064972 | orchestrator | Friday 29 August 2025 19:32:07 +0000 (0:00:02.298) 0:00:30.276 *********
2025-08-29 19:34:52.064984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 19:34:52.065015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 19:34:52.065028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 19:34:52.065040 | orchestrator |
2025-08-29 19:34:52.065051 | orchestrator | TASK [mariadb : Create MariaDB volume]
*****************************************
2025-08-29 19:34:52.065068 | orchestrator | Friday 29 August 2025 19:32:10 +0000 (0:00:03.224) 0:00:33.500 *********
2025-08-29 19:34:52.065079 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.065090 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:34:52.065101 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:34:52.065112 | orchestrator |
2025-08-29 19:34:52.065123 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-08-29 19:34:52.065134 | orchestrator | Friday 29 August 2025 19:32:11 +0000 (0:00:00.939) 0:00:34.439 *********
2025-08-29 19:34:52.065145 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.065157 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.065167 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.065179 | orchestrator |
2025-08-29 19:34:52.065189 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-08-29 19:34:52.065201 | orchestrator | Friday 29 August 2025 19:32:12 +0000 (0:00:00.810) 0:00:35.250 *********
2025-08-29 19:34:52.065213 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.065226 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.065237 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.065249 | orchestrator |
2025-08-29 19:34:52.065260 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-08-29 19:34:52.065271 | orchestrator | Friday 29 August 2025 19:32:13 +0000 (0:00:00.463) 0:00:35.713 *********
2025-08-29 19:34:52.065288 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-08-29 19:34:52.065301 | orchestrator | ...ignoring
2025-08-29 19:34:52.065315 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-08-29 19:34:52.065326 | orchestrator | ...ignoring
2025-08-29 19:34:52.065337 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-08-29 19:34:52.065347 | orchestrator | ...ignoring
2025-08-29 19:34:52.065358 | orchestrator |
2025-08-29 19:34:52.065368 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-08-29 19:34:52.065377 | orchestrator | Friday 29 August 2025 19:32:24 +0000 (0:00:10.973) 0:00:46.686 *********
2025-08-29 19:34:52.065387 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.065398 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.065408 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.065418 | orchestrator |
2025-08-29 19:34:52.065429 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-08-29 19:34:52.065438 | orchestrator | Friday 29 August 2025 19:32:24 +0000 (0:00:00.515) 0:00:47.202 *********
2025-08-29 19:34:52.065448 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.065457 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.065467 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.065476 | orchestrator |
2025-08-29 19:34:52.065486 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-08-29 19:34:52.065498 | orchestrator | Friday 29 August 2025 19:32:25 +0000 (0:00:00.662) 0:00:47.864 *********
2025-08-29 19:34:52.065508 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.065518 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.065528 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.065539 | orchestrator |
2025-08-29 19:34:52.065549 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-08-29 19:34:52.065560 | orchestrator | Friday 29 August 2025 19:32:25 +0000 (0:00:00.499) 0:00:48.363 *********
2025-08-29 19:34:52.065570 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.065582 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.065594 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.065605 | orchestrator |
2025-08-29 19:34:52.065616 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-08-29 19:34:52.065638 | orchestrator | Friday 29 August 2025 19:32:26 +0000 (0:00:00.416) 0:00:48.779 *********
2025-08-29 19:34:52.065649 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.065660 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.065671 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.065682 | orchestrator |
2025-08-29 19:34:52.065693 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-08-29 19:34:52.065704 | orchestrator | Friday 29 August 2025 19:32:26 +0000 (0:00:00.502) 0:00:49.282 *********
2025-08-29 19:34:52.065776 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.065790 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.065801 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.065812 | orchestrator |
2025-08-29 19:34:52.065823 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 19:34:52.065834 | orchestrator | Friday 29 August 2025 19:32:27 +0000 (0:00:00.890) 0:00:50.172 *********
2025-08-29 19:34:52.065932 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.065951 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.065962 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-08-29 19:34:52.065973 | orchestrator |
2025-08-29 19:34:52.065985 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-08-29 19:34:52.065996 | orchestrator | Friday 29 August 2025 19:32:27 +0000 (0:00:00.394) 0:00:50.567 *********
2025-08-29 19:34:52.066007 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.066064 | orchestrator |
2025-08-29 19:34:52.066078 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-08-29 19:34:52.066090 | orchestrator | Friday 29 August 2025 19:32:38 +0000 (0:00:10.439) 0:01:01.007 *********
2025-08-29 19:34:52.066101 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.066112 | orchestrator |
2025-08-29 19:34:52.066123 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 19:34:52.066135 | orchestrator | Friday 29 August 2025 19:32:38 +0000 (0:00:00.130) 0:01:01.138 *********
2025-08-29 19:34:52.066146 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.066157 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.066168 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.066179 | orchestrator |
2025-08-29 19:34:52.066190 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-08-29 19:34:52.066201 | orchestrator | Friday 29 August 2025 19:32:39 +0000 (0:00:01.021) 0:01:02.159 *********
2025-08-29 19:34:52.066212 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.066224 | orchestrator |
2025-08-29 19:34:52.066235 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-08-29 19:34:52.066246 | orchestrator | Friday 29 August 2025 19:32:47 +0000 (0:00:07.830) 0:01:09.989 *********
2025-08-29 19:34:52.066257 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.066268 | orchestrator |
2025-08-29 19:34:52.066279 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-08-29 19:34:52.066290 | orchestrator | Friday 29 August 2025 19:32:48 +0000 (0:00:01.638) 0:01:11.628 *********
2025-08-29 19:34:52.066301 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.066312 | orchestrator |
2025-08-29 19:34:52.066323 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-08-29 19:34:52.066335 | orchestrator | Friday 29 August 2025 19:32:51 +0000 (0:00:02.588) 0:01:14.216 *********
2025-08-29 19:34:52.066346 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.066357 | orchestrator |
2025-08-29 19:34:52.066368 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-08-29 19:34:52.066379 | orchestrator | Friday 29 August 2025 19:32:51 +0000 (0:00:00.129) 0:01:14.345 *********
2025-08-29 19:34:52.066390 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.066413 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.066425 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.066444 | orchestrator |
2025-08-29 19:34:52.066455 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-08-29 19:34:52.066466 | orchestrator | Friday 29 August 2025 19:32:52 +0000 (0:00:00.320) 0:01:14.665 *********
2025-08-29 19:34:52.066477 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.066488 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-08-29 19:34:52.066499 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:34:52.066510 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:34:52.066521 | orchestrator |
2025-08-29 19:34:52.066532 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-08-29 19:34:52.066543 | orchestrator | skipping: no hosts matched
2025-08-29 19:34:52.066554 | orchestrator |
2025-08-29 19:34:52.066565 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 19:34:52.066577 | orchestrator |
2025-08-29 19:34:52.066588 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 19:34:52.066599 | orchestrator | Friday 29 August 2025 19:32:52 +0000 (0:00:00.517) 0:01:15.183 *********
2025-08-29 19:34:52.066610 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:34:52.066621 | orchestrator |
2025-08-29 19:34:52.066632 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 19:34:52.066643 | orchestrator | Friday 29 August 2025 19:33:17 +0000 (0:00:24.520) 0:01:39.703 *********
2025-08-29 19:34:52.066654 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.066665 | orchestrator |
2025-08-29 19:34:52.066676 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 19:34:52.066692 | orchestrator | Friday 29 August 2025 19:33:33 +0000 (0:00:16.590) 0:01:56.294 *********
2025-08-29 19:34:52.066703 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.066730 | orchestrator |
2025-08-29 19:34:52.066742 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 19:34:52.066753 | orchestrator |
2025-08-29 19:34:52.066764 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 19:34:52.066776 | orchestrator | Friday 29 August 2025 19:33:35 +0000 (0:00:02.195) 0:01:58.490 *********
2025-08-29 19:34:52.066787 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:34:52.066799 | orchestrator |
2025-08-29 19:34:52.066810 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 19:34:52.066822 | orchestrator | Friday 29 August 2025 19:33:56 +0000 (0:00:20.380) 0:02:18.870 *********
2025-08-29 19:34:52.066834 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.066846 | orchestrator |
2025-08-29 19:34:52.066858 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 19:34:52.066870 | orchestrator | Friday 29 August 2025 19:34:16 +0000 (0:00:20.617) 0:02:39.487 *********
2025-08-29 19:34:52.066882 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.066894 | orchestrator |
2025-08-29 19:34:52.066905 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-08-29 19:34:52.066916 | orchestrator |
2025-08-29 19:34:52.066927 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 19:34:52.066939 | orchestrator | Friday 29 August 2025 19:34:19 +0000 (0:00:02.265) 0:02:41.753 *********
2025-08-29 19:34:52.066950 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.066961 | orchestrator |
2025-08-29 19:34:52.066972 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 19:34:52.066983 | orchestrator | Friday 29 August 2025 19:34:35 +0000 (0:00:16.111) 0:02:57.864 *********
2025-08-29 19:34:52.066995 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.067008 | orchestrator |
2025-08-29 19:34:52.067020 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 19:34:52.067032 | orchestrator | Friday 29 August 2025 19:34:35 +0000 (0:00:00.537) 0:02:58.402 *********
2025-08-29 19:34:52.067044 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.067055 | orchestrator |
2025-08-29 19:34:52.067067 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-08-29 19:34:52.067086 | orchestrator |
2025-08-29 19:34:52.067097 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-08-29 19:34:52.067108 | orchestrator | Friday 29 August 2025 19:34:38 +0000 (0:00:02.402) 0:03:00.804 *********
2025-08-29 19:34:52.067119 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:34:52.067130 | orchestrator |
2025-08-29 19:34:52.067141 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-08-29 19:34:52.067153 | orchestrator | Friday 29 August 2025 19:34:38 +0000 (0:00:00.466) 0:03:01.271 *********
2025-08-29 19:34:52.067162 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.067173 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.067184 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.067195 | orchestrator |
2025-08-29 19:34:52.067206 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-08-29 19:34:52.067218 | orchestrator | Friday 29 August 2025 19:34:40 +0000 (0:00:02.235) 0:03:03.506 *********
2025-08-29 19:34:52.067229 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.067240 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.067252 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.067264 | orchestrator |
2025-08-29 19:34:52.067275 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-08-29 19:34:52.067286 | orchestrator | Friday 29 August 2025 19:34:43 +0000 (0:00:02.239) 0:03:05.746 *********
2025-08-29 19:34:52.067296 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.067302 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.067309 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.067316 | orchestrator |
2025-08-29 19:34:52.067322 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-08-29 19:34:52.067329 | orchestrator | Friday 29 August 2025 19:34:45 +0000 (0:00:02.272) 0:03:08.018 *********
2025-08-29 19:34:52.067335 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.067342 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.067349 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:34:52.067355 | orchestrator |
2025-08-29 19:34:52.067362 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-08-29 19:34:52.067376 | orchestrator | Friday 29 August 2025 19:34:47 +0000 (0:00:02.104) 0:03:10.122 *********
2025-08-29 19:34:52.067383 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:34:52.067390 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:34:52.067396 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:34:52.067403 | orchestrator |
2025-08-29 19:34:52.067409 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-08-29 19:34:52.067416 | orchestrator | Friday 29 August 2025 19:34:50 +0000 (0:00:02.900) 0:03:13.023 *********
2025-08-29 19:34:52.067423 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:34:52.067429 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:34:52.067436 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:34:52.067442 | orchestrator |
2025-08-29 19:34:52.067449 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:34:52.067456 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-08-29 19:34:52.067464 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-08-29 19:34:52.067471 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-08-29 19:34:52.067483 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-08-29 19:34:52.067490 | orchestrator |
2025-08-29 19:34:52.067497 | orchestrator |
2025-08-29 19:34:52.067510 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:34:52.067516 | orchestrator | Friday 29 August 2025 19:34:50 +0000 (0:00:00.444) 0:03:13.468 *********
2025-08-29 19:34:52.067523 | orchestrator | ===============================================================================
2025-08-29 19:34:52.067530 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.90s
2025-08-29 19:34:52.067541 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.21s
2025-08-29 19:34:52.067552 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.11s
2025-08-29 19:34:52.067563 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.97s
2025-08-29 19:34:52.067574 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.44s
2025-08-29 19:34:52.067586 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.83s
2025-08-29 19:34:52.067597 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.63s
2025-08-29 19:34:52.067610 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.46s
2025-08-29 19:34:52.067622 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.31s
2025-08-29 19:34:52.067635 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.28s
2025-08-29 19:34:52.067647 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.69s
2025-08-29 19:34:52.067657 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.22s
2025-08-29 19:34:52.067667 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s
2025-08-29 19:34:52.067677 | orchestrator | mariadb : Wait for
MariaDB service to be ready through VIP -------------- 2.90s
2025-08-29 19:34:52.067688 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.59s
2025-08-29 19:34:52.067699 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.40s
2025-08-29 19:34:52.067709 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.30s
2025-08-29 19:34:52.067738 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.27s
2025-08-29 19:34:52.067749 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.24s
2025-08-29 19:34:52.067759 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.24s
2025-08-29 19:34:55.113818 | orchestrator | 2025-08-29 19:34:55 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state STARTED
2025-08-29 19:34:55.114948 | orchestrator | 2025-08-29 19:34:55 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:34:55.117279 | orchestrator | 2025-08-29 19:34:55 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:34:55.117307 | orchestrator | 2025-08-29 19:34:55 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:08.275542 | orchestrator |
2025-08-29 19:36:08.275772 | orchestrator | 2025-08-29 19:36:08 | INFO  | Task d99ee1c3-b383-441c-b198-d50733546183 is in state SUCCESS
2025-08-29 19:36:08.278322 | orchestrator |
2025-08-29 19:36:08.278395 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-08-29 19:36:08.278424 | orchestrator |
2025-08-29 19:36:08.278446 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-08-29 19:36:08.278466 | orchestrator | Friday 29 August 2025 19:33:54 +0000 (0:00:00.598) 0:00:00.598 *********
2025-08-29 19:36:08.278485 | orchestrator | included:
/ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:36:08.278505 | orchestrator |
2025-08-29 19:36:08.278526 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 19:36:08.278547 | orchestrator | Friday 29 August 2025 19:33:55 +0000 (0:00:00.638) 0:00:01.237 *********
2025-08-29 19:36:08.278568 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.278588 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.278607 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.278658 | orchestrator |
2025-08-29 19:36:08.278677 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 19:36:08.278695 | orchestrator | Friday 29 August 2025 19:33:56 +0000 (0:00:00.648) 0:00:01.886 *********
2025-08-29 19:36:08.279589 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.279652 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.279671 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.279690 | orchestrator |
2025-08-29 19:36:08.279708 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 19:36:08.279725 | orchestrator | Friday 29 August 2025 19:33:56 +0000 (0:00:00.304) 0:00:02.191 *********
2025-08-29 19:36:08.279751 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.279777 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.279796 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.279812 | orchestrator |
2025-08-29 19:36:08.279830 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 19:36:08.279846 | orchestrator | Friday 29 August 2025 19:33:57 +0000 (0:00:00.790) 0:00:02.981 *********
2025-08-29 19:36:08.279862 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.279879 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.279929 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.279948 | orchestrator |
2025-08-29 19:36:08.279964 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 19:36:08.279982 | orchestrator | Friday 29 August 2025 19:33:57 +0000 (0:00:00.306) 0:00:03.288 *********
2025-08-29 19:36:08.279999 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.280018 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.280034 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.280050 | orchestrator |
2025-08-29 19:36:08.280069 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 19:36:08.280088 | orchestrator | Friday 29 August 2025 19:33:57 +0000 (0:00:00.294) 0:00:03.583 *********
2025-08-29 19:36:08.280106 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.280125 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.280142 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.280160 | orchestrator |
2025-08-29 19:36:08.280176 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 19:36:08.280194 | orchestrator | Friday 29 August 2025 19:33:58 +0000 (0:00:00.300) 0:00:03.883 *********
2025-08-29 19:36:08.280211 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.280229 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.280247 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.280265 | orchestrator |
2025-08-29 19:36:08.280284 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 19:36:08.280303 | orchestrator | Friday 29 August 2025 19:33:58 +0000 (0:00:00.490) 0:00:04.374 *********
2025-08-29 19:36:08.280321 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.280340 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.280360 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.280378 | orchestrator |
2025-08-29 19:36:08.280419 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 19:36:08.280438 | orchestrator | Friday 29 August 2025 19:33:58 +0000 (0:00:00.292) 0:00:04.666 *********
2025-08-29 19:36:08.280457 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 19:36:08.280477 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:36:08.280495 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 19:36:08.280513 | orchestrator |
2025-08-29 19:36:08.280531 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 19:36:08.280550 | orchestrator | Friday 29 August 2025 19:33:59 +0000 (0:00:00.660) 0:00:05.327 *********
2025-08-29 19:36:08.280568 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.280586 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.280604 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.280710 | orchestrator |
2025-08-29 19:36:08.280731 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 19:36:08.280750 | orchestrator | Friday 29 August 2025 19:34:00 +0000 (0:00:00.425) 0:00:05.752 *********
2025-08-29 19:36:08.280768 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 19:36:08.280805 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:36:08.280825 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 19:36:08.280844 | orchestrator |
2025-08-29 19:36:08.280862 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 19:36:08.280879 | orchestrator | Friday 29 August 2025
19:34:02 +0000 (0:00:02.134) 0:00:07.887 ********* 2025-08-29 19:36:08.280898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 19:36:08.280916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 19:36:08.280934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 19:36:08.280952 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.280970 | orchestrator | 2025-08-29 19:36:08.280988 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 19:36:08.281090 | orchestrator | Friday 29 August 2025 19:34:02 +0000 (0:00:00.404) 0:00:08.292 ********* 2025-08-29 19:36:08.281112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.281133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.281150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.281168 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.281184 | orchestrator | 2025-08-29 19:36:08.281201 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 19:36:08.281219 | orchestrator | Friday 29 August 2025 19:34:03 +0000 (0:00:00.836) 0:00:09.128 ********* 2025-08-29 19:36:08.281239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.281259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.281290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.281308 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.281326 | orchestrator | 2025-08-29 19:36:08.281343 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 19:36:08.281358 | orchestrator | Friday 29 August 2025 19:34:03 +0000 (0:00:00.170) 0:00:09.299 ********* 2025-08-29 19:36:08.281376 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '412d3aa3af7f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 19:34:00.685306', 'end': '2025-08-29 19:34:00.728852', 'delta': '0:00:00.043546', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps 
-q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['412d3aa3af7f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-08-29 19:36:08.281404 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c0701a106efd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 19:34:01.439062', 'end': '2025-08-29 19:34:01.491160', 'delta': '0:00:00.052098', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c0701a106efd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-08-29 19:36:08.281474 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '76bfe401729b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 19:34:01.973329', 'end': '2025-08-29 19:34:02.023117', 'delta': '0:00:00.049788', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['76bfe401729b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-08-29 
19:36:08.281493 | orchestrator | 2025-08-29 19:36:08.281511 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 19:36:08.281529 | orchestrator | Friday 29 August 2025 19:34:03 +0000 (0:00:00.376) 0:00:09.676 ********* 2025-08-29 19:36:08.281546 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:36:08.281563 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:36:08.281580 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:36:08.281597 | orchestrator | 2025-08-29 19:36:08.281615 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 19:36:08.281672 | orchestrator | Friday 29 August 2025 19:34:04 +0000 (0:00:00.428) 0:00:10.105 ********* 2025-08-29 19:36:08.281690 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-08-29 19:36:08.281707 | orchestrator | 2025-08-29 19:36:08.281725 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 19:36:08.281743 | orchestrator | Friday 29 August 2025 19:34:06 +0000 (0:00:01.681) 0:00:11.787 ********* 2025-08-29 19:36:08.281760 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.281777 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.281794 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.281811 | orchestrator | 2025-08-29 19:36:08.281828 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 19:36:08.281845 | orchestrator | Friday 29 August 2025 19:34:06 +0000 (0:00:00.300) 0:00:12.087 ********* 2025-08-29 19:36:08.281862 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.281880 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.281897 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.281914 | orchestrator | 2025-08-29 19:36:08.281932 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
**********************************************
2025-08-29 19:36:08.281949 | orchestrator | Friday 29 August 2025 19:34:06 +0000 (0:00:00.444) 0:00:12.532 *********
2025-08-29 19:36:08.281966 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.281984 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.282001 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.282058 | orchestrator |
2025-08-29 19:36:08.282077 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 19:36:08.282093 | orchestrator | Friday 29 August 2025 19:34:07 +0000 (0:00:00.493) 0:00:13.026 *********
2025-08-29 19:36:08.282206 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.282224 | orchestrator |
2025-08-29 19:36:08.282240 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 19:36:08.282255 | orchestrator | Friday 29 August 2025 19:34:07 +0000 (0:00:00.141) 0:00:13.167 *********
2025-08-29 19:36:08.282271 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282286 | orchestrator |
2025-08-29 19:36:08.282301 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 19:36:08.282317 | orchestrator | Friday 29 August 2025 19:34:07 +0000 (0:00:00.222) 0:00:13.390 *********
2025-08-29 19:36:08.282333 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282348 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.282363 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.282378 | orchestrator |
2025-08-29 19:36:08.282393 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 19:36:08.282409 | orchestrator | Friday 29 August 2025 19:34:07 +0000 (0:00:00.279) 0:00:13.669 *********
2025-08-29 19:36:08.282423 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282439 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.282454 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.282470 | orchestrator |
2025-08-29 19:36:08.282485 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 19:36:08.282501 | orchestrator | Friday 29 August 2025 19:34:08 +0000 (0:00:00.298) 0:00:13.968 *********
2025-08-29 19:36:08.282516 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282531 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.282546 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.282561 | orchestrator |
2025-08-29 19:36:08.282577 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 19:36:08.282593 | orchestrator | Friday 29 August 2025 19:34:08 +0000 (0:00:00.524) 0:00:14.492 *********
2025-08-29 19:36:08.282608 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282646 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.282672 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.282689 | orchestrator |
2025-08-29 19:36:08.282705 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 19:36:08.282732 | orchestrator | Friday 29 August 2025 19:34:09 +0000 (0:00:00.355) 0:00:14.848 *********
2025-08-29 19:36:08.282748 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282763 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.282778 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.282794 | orchestrator |
2025-08-29 19:36:08.282809 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 19:36:08.282824 | orchestrator | Friday 29 August 2025 19:34:09 +0000 (0:00:00.306) 0:00:15.155 *********
2025-08-29 19:36:08.282841 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.282857 | orchestrator | skipping:
[testbed-node-4] 2025-08-29 19:36:08.282874 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.282890 | orchestrator | 2025-08-29 19:36:08.282907 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 19:36:08.282980 | orchestrator | Friday 29 August 2025 19:34:09 +0000 (0:00:00.339) 0:00:15.494 ********* 2025-08-29 19:36:08.282999 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.283016 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.283033 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.283050 | orchestrator | 2025-08-29 19:36:08.283066 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 19:36:08.283188 | orchestrator | Friday 29 August 2025 19:34:10 +0000 (0:00:00.520) 0:00:16.015 ********* 2025-08-29 19:36:08.283211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--159b9ed4--8d08--5970--86a8--bd63a32380d6-osd--block--159b9ed4--8d08--5970--86a8--bd63a32380d6', 'dm-uuid-LVM-t4EDXhx402ZcE3z2KFlslw8sRuG7oKTbYGF2vURi18WcU2XTQv4lDqBSw9WnxGlH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--338f76e1--8833--5be4--9943--9980bb5050e8-osd--block--338f76e1--8833--5be4--9943--9980bb5050e8', 'dm-uuid-LVM-iKHmePWtKLB5mUYv1rfhhXzkUTyAb52paGjQE2Orfi1AoP63rLVzgZAC6PWtzkkW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-08-29 19:36:08.283316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f946ce78--a8de--59ba--8bf5--045c292b6708-osd--block--f946ce78--a8de--59ba--8bf5--045c292b6708', 'dm-uuid-LVM-K5OisVE7MwbmJZfp6cO3yPv8VG5rk33hfjM3DCpRmgHNUXElf2VldbuyuNjKsvvv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-08-29 19:36:08.283408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9d878572--29ec--5c6d--9e5c--f341c26bb0e1-osd--block--9d878572--29ec--5c6d--9e5c--f341c26bb0e1', 'dm-uuid-LVM-4DTR1TLZAfcyRf3R1a2hjz4yMdW41t7ej2slpNLAsLSSY1atWK0gONQetfswSQFR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 19:36:08.283488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part1', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part14', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part15', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part16', 'scsi-SQEMU_QEMU_HARDDISK_02ff1d4e-2410-4b7f-a7fd-7ee241f95920-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 19:36:08.283513 | 
orchestrator | skipping: [testbed-node-4] => (items loop1–loop7: virtual loop devices, 0.00 Bytes)
2025-08-29 19:36:08.283541 | orchestrator | skipping: [testbed-node-3] => (items sdb, sdc: QEMU HARDDISK, 20.00 GB, LVM PVs holding ceph OSD block volumes dm-0/dm-1)
2025-08-29 19:36:08.283724 | orchestrator | skipping: [testbed-node-3] => (item sdd: QEMU HARDDISK, 20.00 GB, no holders)
2025-08-29 19:36:08.283753 | orchestrator | skipping: [testbed-node-3] => (item sr0: QEMU DVD-ROM, label config-2)
2025-08-29 19:36:08.283779 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.283832 | orchestrator | skipping: [testbed-node-4] => (item sda: QEMU HARDDISK, 80.00 GB; partitions sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB)
2025-08-29 19:36:08.283848 | orchestrator | skipping: [testbed-node-4] => (items sdb, sdc: QEMU HARDDISK, 20.00 GB, LVM PVs holding ceph OSD block volumes dm-0/dm-1)
2025-08-29 19:36:08.283877 | orchestrator | skipping: [testbed-node-4] => (item sdd: QEMU HARDDISK, 20.00 GB, no holders)
2025-08-29 19:36:08.283915 | orchestrator | skipping: [testbed-node-4] => (item sr0: QEMU DVD-ROM, label config-2)
2025-08-29 19:36:08.283952 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.283897 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1: ceph OSD block device-mapper volumes, 20.00 GB)
2025-08-29 19:36:08.283965 | orchestrator | skipping: [testbed-node-5] => (items loop0–loop7: virtual loop devices, 0.00 Bytes)
2025-08-29 19:36:08.284091 | orchestrator | skipping: [testbed-node-5] => (item sda: QEMU HARDDISK, 80.00 GB; partitions sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB)
2025-08-29 19:36:08.284105 | orchestrator | skipping: [testbed-node-5] => (items sdb, sdc: QEMU HARDDISK, 20.00 GB, LVM PVs holding ceph OSD block volumes dm-0/dm-1)
2025-08-29 19:36:08.284145 | orchestrator | skipping: [testbed-node-5] => (item sdd: QEMU HARDDISK, 20.00 GB, no holders)
2025-08-29 19:36:08.284165 | orchestrator | skipping: [testbed-node-5] => (item sr0: QEMU DVD-ROM, label config-2)
2025-08-29 19:36:08.284178 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.284191 | orchestrator |
2025-08-29 19:36:08.284204 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-08-29 19:36:08.284218 | orchestrator | Friday 29 August 2025 19:34:10 +0000 (0:00:00.558) 0:00:16.573 *********
2025-08-29 19:36:08.284232 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0–loop7, sda, sdb, sdc, sdd, sr0: false_condition 'osd_auto_discovery | default(False) | bool')
2025-08-29 19:36:08.284397 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0–loop2: false_condition 'osd_auto_discovery | default(False) | bool')
2025-08-29 19:36:08.284589 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284673 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284686 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284716 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_20dc7da4-7f8b-4c6d-bac0-971cfc3b87cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 19:36:08.284737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f946ce78--a8de--59ba--8bf5--045c292b6708-osd--block--f946ce78--a8de--59ba--8bf5--045c292b6708'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LLHbGs-EyvY-Y1o1-DvDv-Qp0y-rP5z-cuRGsu', 'scsi-0QEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6', 'scsi-SQEMU_QEMU_HARDDISK_5f9ac8f7-ded0-451e-9523-765e677fc5e6'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9d878572--29ec--5c6d--9e5c--f341c26bb0e1-osd--block--9d878572--29ec--5c6d--9e5c--f341c26bb0e1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9KieA2-8dIZ-S4XF-J4Dk-bz8s-vZ0D-4QydRe', 'scsi-0QEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32', 'scsi-SQEMU_QEMU_HARDDISK_b185a2fd-fb6c-4818-b874-6a265721bd32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284765 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.284783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d', 'scsi-SQEMU_QEMU_HARDDISK_a237b6df-fa80-49e2-8f79-019305f27c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-33-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284817 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.284830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d29334ae--dac4--5c8b--9540--76ee60da5ca1-osd--block--d29334ae--dac4--5c8b--9540--76ee60da5ca1', 'dm-uuid-LVM-M7Pznd4vqBN3cdcw7Ka3CMD3cUWktfFuNeBE1p6IEPFdWlZwUMJkYq5Ucj5sGb8T'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284851 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--916dc454--8beb--55d0--b00a--22c96f7025a6-osd--block--916dc454--8beb--55d0--b00a--22c96f7025a6', 'dm-uuid-LVM-zXNl7P21uuZCQHc5oyNdERO4Q6IdPHUAph5oeYzpjdh5dsj1D2Cg3wgPNmI1KrtQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284864 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284878 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284896 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284917 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.284980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.285008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part1', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part14', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part15', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part16', 'scsi-SQEMU_QEMU_HARDDISK_593879a8-1213-4abc-9241-c8a1c7d52cf9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 19:36:08.285031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d29334ae--dac4--5c8b--9540--76ee60da5ca1-osd--block--d29334ae--dac4--5c8b--9540--76ee60da5ca1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sBwl3V-PCyv-qHlY-COea-GaUo-WyS0-3jDzp6', 'scsi-0QEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c', 'scsi-SQEMU_QEMU_HARDDISK_2e78d78f-49e6-4011-b2d5-d2a9d6a8ed1c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.285045 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--916dc454--8beb--55d0--b00a--22c96f7025a6-osd--block--916dc454--8beb--55d0--b00a--22c96f7025a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EHNyYv-2uKH-imfw-3hdf-kdGr-eLBb-oNVihd', 'scsi-0QEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80', 'scsi-SQEMU_QEMU_HARDDISK_4acdbd50-4373-4301-8b9f-e7658d09fe80'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.285063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03', 'scsi-SQEMU_QEMU_HARDDISK_53b41fa5-6534-4646-b4f8-3662ac98ea03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.285083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-18-41-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 19:36:08.285103 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.285117 | orchestrator | 2025-08-29 19:36:08.285130 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 19:36:08.285143 | orchestrator | Friday 29 August 2025 19:34:11 +0000 (0:00:00.658) 0:00:17.231 ********* 2025-08-29 19:36:08.285155 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:36:08.285169 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:36:08.285182 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:36:08.285194 | orchestrator | 2025-08-29 19:36:08.285207 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 19:36:08.285220 | orchestrator | Friday 29 August 2025 19:34:12 +0000 (0:00:00.639) 0:00:17.871 ********* 2025-08-29 19:36:08.285232 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:36:08.285245 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:36:08.285258 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:36:08.285271 | orchestrator | 2025-08-29 19:36:08.285284 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 19:36:08.285297 | orchestrator | Friday 29 August 2025 19:34:12 +0000 (0:00:00.407) 0:00:18.278 ********* 2025-08-29 19:36:08.285310 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:36:08.285324 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:36:08.285337 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:36:08.285349 | orchestrator | 2025-08-29 19:36:08.285361 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 19:36:08.285374 | orchestrator | Friday 29 August 2025 19:34:13 +0000 (0:00:00.602) 0:00:18.881 
********* 2025-08-29 19:36:08.285387 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.285401 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.285413 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.285426 | orchestrator | 2025-08-29 19:36:08.285439 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 19:36:08.285453 | orchestrator | Friday 29 August 2025 19:34:13 +0000 (0:00:00.272) 0:00:19.154 ********* 2025-08-29 19:36:08.285466 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.285479 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.285492 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.285505 | orchestrator | 2025-08-29 19:36:08.285517 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 19:36:08.285530 | orchestrator | Friday 29 August 2025 19:34:13 +0000 (0:00:00.370) 0:00:19.524 ********* 2025-08-29 19:36:08.285544 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.285556 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.285569 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.285582 | orchestrator | 2025-08-29 19:36:08.285595 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 19:36:08.285608 | orchestrator | Friday 29 August 2025 19:34:14 +0000 (0:00:00.434) 0:00:19.959 ********* 2025-08-29 19:36:08.285645 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 19:36:08.285659 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 19:36:08.285671 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 19:36:08.285684 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 19:36:08.285697 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 19:36:08.285710 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 19:36:08.285723 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 19:36:08.285735 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 19:36:08.285748 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 19:36:08.285761 | orchestrator | 2025-08-29 19:36:08.285774 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 19:36:08.285797 | orchestrator | Friday 29 August 2025 19:34:15 +0000 (0:00:00.789) 0:00:20.748 ********* 2025-08-29 19:36:08.285810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 19:36:08.285824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 19:36:08.285837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 19:36:08.285851 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:36:08.285864 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 19:36:08.285878 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 19:36:08.285891 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 19:36:08.285904 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:36:08.285918 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 19:36:08.285931 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 19:36:08.285953 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 19:36:08.285967 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:36:08.285980 | orchestrator | 2025-08-29 19:36:08.285993 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 19:36:08.286006 | orchestrator | Friday 29 August 2025 19:34:15 +0000 (0:00:00.354) 0:00:21.103 ********* 2025-08-29 
19:36:08.286059 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:36:08.286073 | orchestrator |
2025-08-29 19:36:08.286087 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 19:36:08.286101 | orchestrator | Friday 29 August 2025 19:34:16 +0000 (0:00:00.631) 0:00:21.735 *********
2025-08-29 19:36:08.286115 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.286129 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.286142 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.286155 | orchestrator |
2025-08-29 19:36:08.286183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 19:36:08.286196 | orchestrator | Friday 29 August 2025 19:34:16 +0000 (0:00:00.277) 0:00:22.013 *********
2025-08-29 19:36:08.286210 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.286223 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.286234 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.286246 | orchestrator |
2025-08-29 19:36:08.286257 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 19:36:08.286268 | orchestrator | Friday 29 August 2025 19:34:16 +0000 (0:00:00.264) 0:00:22.277 *********
2025-08-29 19:36:08.286280 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.286291 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.286302 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:36:08.286313 | orchestrator |
2025-08-29 19:36:08.286325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 19:36:08.286336 | orchestrator | Friday 29 August 2025 19:34:16 +0000 (0:00:00.297) 0:00:22.574 *********
2025-08-29 19:36:08.286348 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.286359 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.286370 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.286381 | orchestrator |
2025-08-29 19:36:08.286392 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 19:36:08.286402 | orchestrator | Friday 29 August 2025 19:34:17 +0000 (0:00:00.493) 0:00:23.067 *********
2025-08-29 19:36:08.286414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:36:08.286424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:36:08.286434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:36:08.286444 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.286455 | orchestrator |
2025-08-29 19:36:08.286466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 19:36:08.286487 | orchestrator | Friday 29 August 2025 19:34:17 +0000 (0:00:00.329) 0:00:23.397 *********
2025-08-29 19:36:08.286498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:36:08.286509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:36:08.286520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:36:08.286530 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.286541 | orchestrator |
2025-08-29 19:36:08.286551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 19:36:08.286562 | orchestrator | Friday 29 August 2025 19:34:18 +0000 (0:00:00.421) 0:00:23.818 *********
2025-08-29 19:36:08.286572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:36:08.286583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 19:36:08.286594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 19:36:08.286604 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.286614 | orchestrator |
2025-08-29 19:36:08.286653 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 19:36:08.286665 | orchestrator | Friday 29 August 2025 19:34:18 +0000 (0:00:00.296) 0:00:24.115 *********
2025-08-29 19:36:08.286676 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:36:08.286686 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:36:08.286697 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:36:08.286708 | orchestrator |
2025-08-29 19:36:08.286720 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 19:36:08.286731 | orchestrator | Friday 29 August 2025 19:34:18 +0000 (0:00:00.281) 0:00:24.396 *********
2025-08-29 19:36:08.286742 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 19:36:08.286752 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 19:36:08.286763 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 19:36:08.286774 | orchestrator |
2025-08-29 19:36:08.286786 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-08-29 19:36:08.286797 | orchestrator | Friday 29 August 2025 19:34:19 +0000 (0:00:00.509) 0:00:24.906 *********
2025-08-29 19:36:08.286808 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 19:36:08.286819 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:36:08.286830 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 19:36:08.286841 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:36:08.286852 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-08-29 19:36:08.286863 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-08-29 19:36:08.286874 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-08-29 19:36:08.286885 | orchestrator |
2025-08-29 19:36:08.286896 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-08-29 19:36:08.286920 | orchestrator | Friday 29 August 2025 19:34:20 +0000 (0:00:00.860) 0:00:25.766 *********
2025-08-29 19:36:08.286931 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 19:36:08.286942 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 19:36:08.286953 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 19:36:08.286963 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 19:36:08.286975 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-08-29 19:36:08.286985 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-08-29 19:36:08.286996 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-08-29 19:36:08.287008 | orchestrator |
2025-08-29 19:36:08.287034 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-08-29 19:36:08.287046 | orchestrator | Friday 29 August 2025 19:34:21 +0000 (0:00:01.654) 0:00:27.421 *********
2025-08-29 19:36:08.287057 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:36:08.287068 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:36:08.287079 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-08-29 19:36:08.287090 | orchestrator |
2025-08-29 19:36:08.287101 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-08-29 19:36:08.287112 | orchestrator | Friday 29 August 2025 19:34:22 +0000 (0:00:00.359) 0:00:27.781 *********
2025-08-29 19:36:08.287125 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 19:36:08.287138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 19:36:08.287150 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 19:36:08.287161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 19:36:08.287172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-08-29 19:36:08.287183 | orchestrator |
2025-08-29 19:36:08.287193 | orchestrator | TASK [generate keys]
2025-08-29 19:36:08.287204 | orchestrator | Friday 29 August 2025 19:35:11 +0000 (0:00:48.996) 0:01:16.777 *********
2025-08-29 19:36:08.287215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287225 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287236 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287258 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287269 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287279 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-08-29 19:36:08.287290 | orchestrator |
2025-08-29 19:36:08.287301 | orchestrator | TASK [get keys from monitors] **************************************************
2025-08-29 19:36:08.287312 | orchestrator | Friday 29 August 2025 19:35:35 +0000 (0:00:24.476) 0:01:41.254 *********
2025-08-29 19:36:08.287323 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287334 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287345 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287356 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287367 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287384 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287395 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 19:36:08.287407 | orchestrator |
2025-08-29 19:36:08.287422 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-08-29 19:36:08.287434 | orchestrator | Friday 29 August 2025 19:35:47 +0000 (0:00:11.891) 0:01:53.145 *********
2025-08-29 19:36:08.287445 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287456 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:36:08.287467 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:36:08.287478 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287489 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:36:08.287500 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:36:08.287517 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287528 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:36:08.287539 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:36:08.287550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287561 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:36:08.287572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:36:08.287583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287595 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:36:08.287606 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:36:08.287638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 19:36:08.287651 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 19:36:08.287662 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 19:36:08.287673 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-08-29 19:36:08.287684 | orchestrator |
2025-08-29 19:36:08.287695 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:36:08.287706 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-08-29 19:36:08.287719 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-08-29 19:36:08.287731 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-08-29 19:36:08.287741 | orchestrator |
2025-08-29 19:36:08.287752 | orchestrator |
2025-08-29 19:36:08.287762 | orchestrator |
2025-08-29 19:36:08.287773 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:36:08.287783 | orchestrator | Friday 29 August 2025 19:36:05 +0000 (0:00:17.735) 0:02:10.881 *********
2025-08-29 19:36:08.287794 | orchestrator | ===============================================================================
2025-08-29 19:36:08.287805 | orchestrator | create openstack pool(s) ----------------------------------------------- 49.00s
2025-08-29 19:36:08.287816 | orchestrator | generate keys ---------------------------------------------------------- 24.48s
2025-08-29 19:36:08.287826 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.74s
2025-08-29 19:36:08.287846 | orchestrator | get keys from monitors ------------------------------------------------- 11.89s
2025-08-29 19:36:08.287858 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.14s
2025-08-29 19:36:08.287869 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.68s
2025-08-29 19:36:08.287880 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.65s
2025-08-29 19:36:08.287890 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.86s
2025-08-29 19:36:08.287902 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.84s
2025-08-29 19:36:08.287912 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.79s
2025-08-29 19:36:08.287923 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.79s
2025-08-29 19:36:08.287934 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s
2025-08-29 19:36:08.287945 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.66s
2025-08-29 19:36:08.287956 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s
2025-08-29 19:36:08.287967 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s
2025-08-29 19:36:08.287978 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s
2025-08-29 19:36:08.287989 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.63s
2025-08-29 19:36:08.288000 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.60s
2025-08-29 19:36:08.288010 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s
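The "create openstack pool(s)" task above created the `backups`, `volumes`, `images`, `metrics` and `vms` pools, each with `pg_num` 32, replica `size` 3 and the `rbd` application. As a minimal sketch (not the ceph-ansible implementation), the item dicts from the log map onto the standard `ceph` CLI roughly like this:

```python
# Sketch: translate pool item dicts (as printed in the log above) into the
# equivalent `ceph` CLI calls. Only the fields used here are reproduced.
POOLS = [
    {"name": "backups", "pg_num": 32, "size": 3, "application": "rbd"},
    {"name": "volumes", "pg_num": 32, "size": 3, "application": "rbd"},
    {"name": "images", "pg_num": 32, "size": 3, "application": "rbd"},
    {"name": "metrics", "pg_num": 32, "size": 3, "application": "rbd"},
    {"name": "vms", "pg_num": 32, "size": 3, "application": "rbd"},
]

def pool_commands(pool):
    """Build the CLI commands that create and configure one pool."""
    name = pool["name"]
    return [
        f"ceph osd pool create {name} {pool['pg_num']}",
        f"ceph osd pool set {name} size {pool['size']}",
        f"ceph osd pool application enable {name} {pool['application']}",
    ]

if __name__ == "__main__":
    for pool in POOLS:
        for cmd in pool_commands(pool):
            print(cmd)
```

The real task runs on the delegated monitor node (`testbed-node-0` in this run) with the admin keyring; the sketch only shows the command shape.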
2025-08-29 19:36:08.288026 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.52s
2025-08-29 19:36:08.288037 | orchestrator | 2025-08-29 19:36:08 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:08.288048 | orchestrator | 2025-08-29 19:36:08 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:08.288059 | orchestrator | 2025-08-29 19:36:08 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:08.288069 | orchestrator | 2025-08-29 19:36:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:11.345703 | orchestrator | 2025-08-29 19:36:11 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:11.348049 | orchestrator | 2025-08-29 19:36:11 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:11.351611 | orchestrator | 2025-08-29 19:36:11 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:11.352031 | orchestrator | 2025-08-29 19:36:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:14.397336 | orchestrator | 2025-08-29 19:36:14 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:14.398727 | orchestrator | 2025-08-29 19:36:14 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:14.400639 | orchestrator | 2025-08-29 19:36:14 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:14.401059 | orchestrator | 2025-08-29 19:36:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:17.451316 | orchestrator | 2025-08-29 19:36:17 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:17.452553 | orchestrator | 2025-08-29 19:36:17 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:17.454566 | orchestrator | 2025-08-29 19:36:17 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:17.454596 | orchestrator | 2025-08-29 19:36:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:20.503057 | orchestrator | 2025-08-29 19:36:20 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:20.505798 | orchestrator | 2025-08-29 19:36:20 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:20.507912 | orchestrator | 2025-08-29 19:36:20 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:20.508259 | orchestrator | 2025-08-29 19:36:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:23.558667 | orchestrator | 2025-08-29 19:36:23 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:23.560165 | orchestrator | 2025-08-29 19:36:23 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:23.561173 | orchestrator | 2025-08-29 19:36:23 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:23.561219 | orchestrator | 2025-08-29 19:36:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:26.613767 | orchestrator | 2025-08-29 19:36:26 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:26.614667 | orchestrator | 2025-08-29 19:36:26 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:26.616513 | orchestrator | 2025-08-29 19:36:26 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:26.616583 | orchestrator | 2025-08-29 19:36:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:29.660298 | orchestrator | 2025-08-29 19:36:29 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:29.662496 | orchestrator | 2025-08-29 19:36:29 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:29.665720 | orchestrator | 2025-08-29 19:36:29 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:29.665798 | orchestrator | 2025-08-29 19:36:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:32.721305 | orchestrator | 2025-08-29 19:36:32 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:32.722224 | orchestrator | 2025-08-29 19:36:32 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state STARTED
2025-08-29 19:36:32.723905 | orchestrator | 2025-08-29 19:36:32 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:32.723965 | orchestrator | 2025-08-29 19:36:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:35.786716 | orchestrator | 2025-08-29 19:36:35 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:35.787871 | orchestrator | 2025-08-29 19:36:35 | INFO  | Task 7d63ff42-d24e-4450-b1d6-afe5a7bf14c4 is in state SUCCESS
2025-08-29 19:36:35.789439 | orchestrator | 2025-08-29 19:36:35 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:35.789546 | orchestrator | 2025-08-29 19:36:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:38.837447 | orchestrator | 2025-08-29 19:36:38 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:38.839436 | orchestrator | 2025-08-29 19:36:38 | INFO  | Task 6fdccf41-ec88-446d-aef8-3fc0a25a8089 is in state STARTED
2025-08-29 19:36:38.841304 | orchestrator | 2025-08-29 19:36:38 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:38.841349 | orchestrator | 2025-08-29 19:36:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:41.880675 | orchestrator | 2025-08-29 19:36:41 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:41.882167 | orchestrator | 2025-08-29 19:36:41 | INFO  | Task 6fdccf41-ec88-446d-aef8-3fc0a25a8089 is in state STARTED
2025-08-29 19:36:41.886179 | orchestrator | 2025-08-29 19:36:41 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state STARTED
2025-08-29 19:36:41.886246 | orchestrator | 2025-08-29 19:36:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:36:44.930261 | orchestrator | 2025-08-29 19:36:44 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED
2025-08-29 19:36:44.933480 | orchestrator | 2025-08-29 19:36:44 | INFO  | Task 6fdccf41-ec88-446d-aef8-3fc0a25a8089 is in state STARTED
2025-08-29 19:36:44.938679 | orchestrator | 2025-08-29 19:36:44 | INFO  | Task 35694d26-4aa2-44f1-8609-b635bf9e0bcd is in state SUCCESS
2025-08-29 19:36:44.939879 | orchestrator |
2025-08-29 19:36:44.939930 | orchestrator |
2025-08-29 19:36:44.939938 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-08-29 19:36:44.939944 | orchestrator |
2025-08-29 19:36:44.939950 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-08-29 19:36:44.939955 | orchestrator | Friday 29 August 2025 19:36:09 +0000 (0:00:00.172) 0:00:00.172 *********
2025-08-29 19:36:44.939961 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-08-29 19:36:44.939967 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.939973 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.939978 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 19:36:44.939983 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
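The repeated `INFO | Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a client polling the orchestrator's task states until each reaches `SUCCESS`. A minimal, testable sketch of such a wait loop (the state lookup is caller-supplied here; the real client API is not shown in the log):

```python
def wait_for_success(get_state, task_id, interval=1, max_checks=100, sleep=None):
    """Poll `get_state(task_id)` until it returns 'SUCCESS'.

    `get_state` stands in for the real task-state lookup, which the log
    does not show; `sleep` is injectable so the loop is testable.
    Returns the number of checks performed.
    """
    for check in range(max_checks):
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state == "SUCCESS":
            return check + 1
        print(f"Wait {interval} second(s) until the next check")
        if sleep:
            sleep(interval)
    raise TimeoutError(f"task {task_id} never reached SUCCESS")

# Simulated states mirroring the log: STARTED for a while, then SUCCESS.
states = iter(["STARTED", "STARTED", "STARTED", "SUCCESS"])
checks = wait_for_success(lambda _tid: next(states), "7d63ff42")
```

In the real run the interval is 1 second plus round-trip time, which matches the roughly 3-second spacing between poll timestamps above.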
2025-08-29 19:36:44.939988 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-08-29 19:36:44.939993 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-08-29 19:36:44.939998 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-08-29 19:36:44.940002 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-08-29 19:36:44.940007 | orchestrator |
2025-08-29 19:36:44.940012 | orchestrator | TASK [Create share directory] **************************************************
2025-08-29 19:36:44.940017 | orchestrator | Friday 29 August 2025 19:36:13 +0000 (0:00:04.098) 0:00:04.271 *********
2025-08-29 19:36:44.940023 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 19:36:44.940028 | orchestrator |
2025-08-29 19:36:44.940033 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-08-29 19:36:44.940038 | orchestrator | Friday 29 August 2025 19:36:14 +0000 (0:00:01.027) 0:00:05.298 *********
2025-08-29 19:36:44.940043 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-08-29 19:36:44.940048 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.940053 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.940058 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 19:36:44.940063 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.940068 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-08-29 19:36:44.940073 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-08-29 19:36:44.940078 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-08-29 19:36:44.940242 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-08-29 19:36:44.940252 | orchestrator |
2025-08-29 19:36:44.940258 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-08-29 19:36:44.940263 | orchestrator | Friday 29 August 2025 19:36:28 +0000 (0:00:13.261) 0:00:18.560 *********
2025-08-29 19:36:44.940269 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-08-29 19:36:44.940274 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.940280 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.940285 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 19:36:44.940290 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-08-29 19:36:44.940296 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-08-29 19:36:44.940301 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-08-29 19:36:44.940307 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-08-29 19:36:44.940312 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-08-29 19:36:44.940317 | orchestrator |
2025-08-29 19:36:44.940323 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:36:44.940328 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 19:36:44.940334 | orchestrator |
2025-08-29 19:36:44.940339 | orchestrator |
2025-08-29 19:36:44.940345 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:36:44.940350 | orchestrator | Friday 29 August 2025 19:36:34 +0000 (0:00:06.514) 0:00:25.075 *********
2025-08-29 19:36:44.940356 | orchestrator | ===============================================================================
2025-08-29 19:36:44.940361 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.26s
2025-08-29 19:36:44.940366 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.51s
2025-08-29 19:36:44.940371 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.10s
2025-08-29 19:36:44.940377 | orchestrator | Create share directory -------------------------------------------------- 1.03s
2025-08-29 19:36:44.940382 | orchestrator |
2025-08-29 19:36:44.940387 | orchestrator |
2025-08-29 19:36:44.940392 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:36:44.940398 | orchestrator |
2025-08-29 19:36:44.940412 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:36:44.940417 | orchestrator | Friday 29 August 2025 19:34:55 +0000 (0:00:00.264) 0:00:00.264 *********
2025-08-29 19:36:44.940422 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.940428 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.940433 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.940438 | orchestrator |
2025-08-29 19:36:44.940444 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:36:44.940449 | orchestrator | Friday 29 August 2025 19:34:55 +0000 (0:00:00.310) 0:00:00.575 *********
2025-08-29 19:36:44.940455 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-08-29 19:36:44.940461 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
19:36:44.940467 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 19:36:44.940472 | orchestrator | 2025-08-29 19:36:44.940477 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 19:36:44.940482 | orchestrator | 2025-08-29 19:36:44.940488 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 19:36:44.940494 | orchestrator | Friday 29 August 2025 19:34:55 +0000 (0:00:00.394) 0:00:00.969 ********* 2025-08-29 19:36:44.940499 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:36:44.940510 | orchestrator | 2025-08-29 19:36:44.940515 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 19:36:44.940521 | orchestrator | Friday 29 August 2025 19:34:56 +0000 (0:00:00.527) 0:00:01.497 ********* 2025-08-29 19:36:44.940534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 19:36:44.940550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 19:36:44.940565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 19:36:44.940624 | orchestrator | 2025-08-29 19:36:44.940630 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 19:36:44.940635 | orchestrator | Friday 29 August 2025 19:34:57 +0000 (0:00:01.193) 0:00:02.691 ********* 2025-08-29 19:36:44.940653 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:36:44.940658 | orchestrator | ok: [testbed-node-1] 2025-08-29 
19:36:44.940663 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.940668 | orchestrator |
2025-08-29 19:36:44.940673 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 19:36:44.940678 | orchestrator | Friday 29 August 2025 19:34:58 +0000 (0:00:00.485) 0:00:03.176 *********
2025-08-29 19:36:44.940683 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-08-29 19:36:44.940688 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-08-29 19:36:44.940696 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-08-29 19:36:44.940701 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-08-29 19:36:44.940706 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-08-29 19:36:44.940711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-08-29 19:36:44.940721 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-08-29 19:36:44.940726 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-08-29 19:36:44.940731 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-08-29 19:36:44.940736 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-08-29 19:36:44.940741 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-08-29 19:36:44.940746 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-08-29 19:36:44.940750 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-08-29 19:36:44.940755 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-08-29 19:36:44.940760 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-08-29 19:36:44.940765 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-08-29 19:36:44.940770 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-08-29 19:36:44.940775 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-08-29 19:36:44.940779 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-08-29 19:36:44.940784 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-08-29 19:36:44.940789 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-08-29 19:36:44.940794 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-08-29 19:36:44.940799 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-08-29 19:36:44.940804 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-08-29 19:36:44.940810 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-08-29 19:36:44.940816 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-08-29 19:36:44.940825 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-08-29 19:36:44.940830 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-08-29 19:36:44.940835 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-08-29 19:36:44.940840 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-08-29 19:36:44.940845 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-08-29 19:36:44.940849 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-08-29 19:36:44.940854 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-08-29 19:36:44.940859 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-08-29 19:36:44.940864 | orchestrator |
2025-08-29 19:36:44.940869 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.940879 | orchestrator | Friday 29 August 2025 19:34:58 +0000 (0:00:00.753) 0:00:03.930 *********
2025-08-29 19:36:44.940885 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.940890 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.940895 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.940901 | orchestrator |
2025-08-29 19:36:44.940906 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.940912 | orchestrator | Friday 29 August 2025 19:34:59 +0000 (0:00:00.300) 0:00:04.231 *********
2025-08-29 19:36:44.940917 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.940923 | orchestrator |
2025-08-29 19:36:44.940929 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.940937 | orchestrator | Friday 29 August 2025 19:34:59 +0000 (0:00:00.128) 0:00:04.360 *********
2025-08-29 19:36:44.940943 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.940949 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.940954 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.940960 | orchestrator |
2025-08-29 19:36:44.940965 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.940971 | orchestrator | Friday 29 August 2025 19:34:59 +0000 (0:00:00.464) 0:00:04.824 *********
2025-08-29 19:36:44.940976 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.940982 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.940987 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.940993 | orchestrator |
2025-08-29 19:36:44.940998 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941004 | orchestrator | Friday 29 August 2025 19:35:00 +0000 (0:00:00.322) 0:00:05.146 *********
2025-08-29 19:36:44.941010 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941015 | orchestrator |
2025-08-29 19:36:44.941021 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941026 | orchestrator | Friday 29 August 2025 19:35:00 +0000 (0:00:00.148) 0:00:05.295 *********
2025-08-29 19:36:44.941032 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941037 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941043 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941049 | orchestrator |
2025-08-29 19:36:44.941054 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941059 | orchestrator | Friday 29 August 2025 19:35:00 +0000 (0:00:00.271) 0:00:05.567 *********
2025-08-29 19:36:44.941065 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941070 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941076 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941081 | orchestrator |
2025-08-29 19:36:44.941087 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941092 | orchestrator | Friday 29 August 2025 19:35:00 +0000 (0:00:00.396) 0:00:05.963 *********
2025-08-29 19:36:44.941098 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941103 | orchestrator |
2025-08-29 19:36:44.941109 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941114 | orchestrator | Friday 29 August 2025 19:35:01 +0000 (0:00:00.125) 0:00:06.089 *********
2025-08-29 19:36:44.941120 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941125 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941131 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941136 | orchestrator |
2025-08-29 19:36:44.941141 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941147 | orchestrator | Friday 29 August 2025 19:35:01 +0000 (0:00:00.578) 0:00:06.667 *********
2025-08-29 19:36:44.941153 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941158 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941163 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941169 | orchestrator |
2025-08-29 19:36:44.941174 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941184 | orchestrator | Friday 29 August 2025 19:35:01 +0000 (0:00:00.319) 0:00:06.987 *********
2025-08-29 19:36:44.941189 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941195 | orchestrator |
2025-08-29 19:36:44.941201 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941206 | orchestrator | Friday 29 August 2025 19:35:02 +0000 (0:00:00.147) 0:00:07.134 *********
2025-08-29 19:36:44.941212 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941217 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941225 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941231 | orchestrator |
2025-08-29 19:36:44.941236 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941241 | orchestrator | Friday 29 August 2025 19:35:02 +0000 (0:00:00.338) 0:00:07.473 *********
2025-08-29 19:36:44.941246 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941251 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941256 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941261 | orchestrator |
2025-08-29 19:36:44.941266 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941270 | orchestrator | Friday 29 August 2025 19:35:02 +0000 (0:00:00.330) 0:00:07.803 *********
2025-08-29 19:36:44.941275 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941280 | orchestrator |
2025-08-29 19:36:44.941285 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941290 | orchestrator | Friday 29 August 2025 19:35:03 +0000 (0:00:00.345) 0:00:08.148 *********
2025-08-29 19:36:44.941295 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941300 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941305 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941309 | orchestrator |
2025-08-29 19:36:44.941314 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941319 | orchestrator | Friday 29 August 2025 19:35:03 +0000 (0:00:00.400) 0:00:08.549 *********
2025-08-29 19:36:44.941324 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941329 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941334 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941339 | orchestrator |
2025-08-29 19:36:44.941344 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941349 | orchestrator | Friday 29 August 2025 19:35:03 +0000 (0:00:00.302) 0:00:08.852 *********
2025-08-29 19:36:44.941354 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941358 | orchestrator |
2025-08-29 19:36:44.941363 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941368 | orchestrator | Friday 29 August 2025 19:35:03 +0000 (0:00:00.127) 0:00:08.979 *********
2025-08-29 19:36:44.941373 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941378 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941383 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941388 | orchestrator |
2025-08-29 19:36:44.941393 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941398 | orchestrator | Friday 29 August 2025 19:35:04 +0000 (0:00:00.280) 0:00:09.259 *********
2025-08-29 19:36:44.941403 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941408 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941413 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941417 | orchestrator |
2025-08-29 19:36:44.941426 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941431 | orchestrator | Friday 29 August 2025 19:35:04 +0000 (0:00:00.621) 0:00:09.881 *********
2025-08-29 19:36:44.941436 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941441 | orchestrator |
2025-08-29 19:36:44.941446 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941451 | orchestrator | Friday 29 August 2025 19:35:04 +0000 (0:00:00.141) 0:00:10.022 *********
2025-08-29 19:36:44.941456 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941465 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941470 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941475 | orchestrator |
2025-08-29 19:36:44.941480 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941484 | orchestrator | Friday 29 August 2025 19:35:05 +0000 (0:00:00.350) 0:00:10.373 *********
2025-08-29 19:36:44.941489 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941494 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941499 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941504 | orchestrator |
2025-08-29 19:36:44.941509 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941513 | orchestrator | Friday 29 August 2025 19:35:05 +0000 (0:00:00.336) 0:00:10.709 *********
2025-08-29 19:36:44.941518 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941523 | orchestrator |
2025-08-29 19:36:44.941528 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941533 | orchestrator | Friday 29 August 2025 19:35:05 +0000 (0:00:00.140) 0:00:10.850 *********
2025-08-29 19:36:44.941538 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941543 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941548 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941552 | orchestrator |
2025-08-29 19:36:44.941557 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941562 | orchestrator | Friday 29 August 2025 19:35:06 +0000 (0:00:00.309) 0:00:11.160 *********
2025-08-29 19:36:44.941580 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941586 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941590 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941596 | orchestrator |
2025-08-29 19:36:44.941600 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941605 | orchestrator | Friday 29 August 2025 19:35:06 +0000 (0:00:00.526) 0:00:11.686 *********
2025-08-29 19:36:44.941610 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941615 | orchestrator |
2025-08-29 19:36:44.941620 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941624 | orchestrator | Friday 29 August 2025 19:35:06 +0000 (0:00:00.139) 0:00:11.826 *********
2025-08-29 19:36:44.941629 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941634 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941639 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941644 | orchestrator |
2025-08-29 19:36:44.941648 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 19:36:44.941653 | orchestrator | Friday 29 August 2025 19:35:07 +0000 (0:00:00.347) 0:00:12.173 *********
2025-08-29 19:36:44.941658 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:36:44.941663 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:36:44.941668 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:36:44.941672 | orchestrator |
2025-08-29 19:36:44.941677 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 19:36:44.941690 | orchestrator | Friday 29 August 2025 19:35:07 +0000 (0:00:00.319) 0:00:12.492 *********
2025-08-29 19:36:44.941695 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941699 | orchestrator |
2025-08-29 19:36:44.941704 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 19:36:44.941709 | orchestrator | Friday 29 August 2025 19:35:07 +0000 (0:00:00.127) 0:00:12.620 *********
2025-08-29 19:36:44.941714 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941719 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941723 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941728 | orchestrator |
2025-08-29 19:36:44.941733 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-08-29 19:36:44.941738 | orchestrator | Friday 29 August 2025 19:35:08 +0000 (0:00:00.567) 0:00:13.187 *********
2025-08-29 19:36:44.941743 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:36:44.941754 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:36:44.941759 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:36:44.941763 | orchestrator |
2025-08-29 19:36:44.941768 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-08-29 19:36:44.941773 | orchestrator | Friday 29 August 2025 19:35:09 +0000 (0:00:01.647) 0:00:14.835 *********
2025-08-29 19:36:44.941778 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 19:36:44.941783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 19:36:44.941788 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 19:36:44.941792 | orchestrator |
2025-08-29 19:36:44.941797 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-08-29 19:36:44.941802 | orchestrator | Friday 29 August 2025 19:35:11 +0000 (0:00:01.896) 0:00:16.732 *********
2025-08-29 19:36:44.941807 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 19:36:44.941811 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 19:36:44.941816 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 19:36:44.941821 | orchestrator |
2025-08-29 19:36:44.941826 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-08-29 19:36:44.941831 | orchestrator | Friday 29 August 2025 19:35:13 +0000 (0:00:02.130) 0:00:18.863 *********
2025-08-29 19:36:44.941839 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 19:36:44.941844 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 19:36:44.941849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 19:36:44.941854 | orchestrator |
2025-08-29 19:36:44.941859 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-08-29 19:36:44.941864 | orchestrator | Friday 29 August 2025 19:35:15 +0000 (0:00:01.962) 0:00:20.825 *********
2025-08-29 19:36:44.941868 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:36:44.941873 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:36:44.941878 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:36:44.941883 | orchestrator |
2025-08-29 19:36:44.941888 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-08-29 19:36:44.941892 | orchestrator | Friday 29 August 2025 19:35:16 +0000 (0:00:00.321) 0:00:21.146 *********
2025-08-29 19:36:44.941897 |
orchestrator | skipping: [testbed-node-0] 2025-08-29 19:36:44.941902 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:36:44.941907 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:36:44.941911 | orchestrator | 2025-08-29 19:36:44.941916 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 19:36:44.941921 | orchestrator | Friday 29 August 2025 19:35:16 +0000 (0:00:00.299) 0:00:21.446 ********* 2025-08-29 19:36:44.941926 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:36:44.941931 | orchestrator | 2025-08-29 19:36:44.941936 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-08-29 19:36:44.941941 | orchestrator | Friday 29 August 2025 19:35:16 +0000 (0:00:00.586) 0:00:22.032 ********* 2025-08-29 19:36:44.941949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 19:36:44.941963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 19:36:44.941973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 19:36:44.941982 | orchestrator | 2025-08-29 19:36:44.941987 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 19:36:44.941992 | orchestrator | Friday 29 August 2025 19:35:18 +0000 (0:00:01.746) 0:00:23.779 ********* 2025-08-29 19:36:44.942002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-08-29 19:36:44.942011 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:36:44.942064 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:36:44.942080 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:36:44.942085 | orchestrator | 2025-08-29 19:36:44.942090 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 19:36:44.942097 | orchestrator | Friday 29 August 2025 19:35:19 +0000 (0:00:00.646) 0:00:24.425 ********* 2025-08-29 19:36:44.942113 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:36:44.942131 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:36:44.942146 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:36:44.942151 | orchestrator | 2025-08-29 19:36:44.942156 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 19:36:44.942161 | orchestrator | Friday 29 August 2025 19:35:20 +0000 (0:00:00.844) 0:00:25.270 ********* 2025-08-29 19:36:44.942170 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:36:44.942183 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:36:44.942197 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:36:44.942203 | orchestrator | 2025-08-29 19:36:44.942208 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 19:36:44.942213 | orchestrator | Friday 29 August 2025 19:35:21 +0000 (0:00:01.759) 0:00:27.029 ********* 2025-08-29 19:36:44.942217 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:36:44.942222 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:36:44.942227 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:36:44.942232 | orchestrator | 2025-08-29 19:36:44.942237 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 19:36:44.942242 | orchestrator | Friday 29 August 2025 19:35:22 +0000 (0:00:00.326) 0:00:27.355 ********* 2025-08-29 19:36:44.942247 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:36:44.942252 | orchestrator | 2025-08-29 19:36:44.942256 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-08-29 19:36:44.942261 | orchestrator | Friday 29 August 2025 19:35:22 +0000 (0:00:02.299) 0:00:27.881 ********* 2025-08-29 19:36:44.942266 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:36:44.942271 | orchestrator | 2025-08-29 19:36:44.942279 | orchestrator | TASK [horizon : Creating
Horizon database user and setting permissions] ******** 2025-08-29 19:36:44.942284 | orchestrator | Friday 29 August 2025 19:35:25 +0000 (0:00:02.299) 0:00:30.180 ********* 2025-08-29 19:36:44.942289 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:36:44.942294 | orchestrator | 2025-08-29 19:36:44.942299 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-08-29 19:36:44.942304 | orchestrator | Friday 29 August 2025 19:35:27 +0000 (0:00:02.661) 0:00:32.842 ********* 2025-08-29 19:36:44.942309 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:36:44.942318 | orchestrator | 2025-08-29 19:36:44.942323 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 19:36:44.942328 | orchestrator | Friday 29 August 2025 19:35:43 +0000 (0:00:15.790) 0:00:48.632 ********* 2025-08-29 19:36:44.942333 | orchestrator | 2025-08-29 19:36:44.942338 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 19:36:44.942342 | orchestrator | Friday 29 August 2025 19:35:43 +0000 (0:00:00.066) 0:00:48.699 ********* 2025-08-29 19:36:44.942347 | orchestrator | 2025-08-29 19:36:44.942352 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 19:36:44.942357 | orchestrator | Friday 29 August 2025 19:35:43 +0000 (0:00:00.087) 0:00:48.787 ********* 2025-08-29 19:36:44.942362 | orchestrator | 2025-08-29 19:36:44.942367 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-08-29 19:36:44.942372 | orchestrator | Friday 29 August 2025 19:35:43 +0000 (0:00:00.070) 0:00:48.857 ********* 2025-08-29 19:36:44.942376 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:36:44.942381 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:36:44.942386 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:36:44.942391 | orchestrator | 
2025-08-29 19:36:44.942396 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:36:44.942401 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-08-29 19:36:44.942406 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 19:36:44.942411 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 19:36:44.942416 | orchestrator | 2025-08-29 19:36:44.942421 | orchestrator | 2025-08-29 19:36:44.942426 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:36:44.942431 | orchestrator | Friday 29 August 2025 19:36:41 +0000 (0:00:58.136) 0:01:46.994 ********* 2025-08-29 19:36:44.942435 | orchestrator | =============================================================================== 2025-08-29 19:36:44.942440 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.14s 2025-08-29 19:36:44.942445 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.79s 2025-08-29 19:36:44.942450 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.66s 2025-08-29 19:36:44.942455 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.30s 2025-08-29 19:36:44.942462 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.13s 2025-08-29 19:36:44.942467 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.96s 2025-08-29 19:36:44.942472 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.90s 2025-08-29 19:36:44.942477 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.76s 2025-08-29 
19:36:44.942482 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.75s 2025-08-29 19:36:44.942487 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.65s 2025-08-29 19:36:44.942492 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2025-08-29 19:36:44.942496 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.84s 2025-08-29 19:36:44.942501 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-08-29 19:36:44.942506 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2025-08-29 19:36:44.942511 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2025-08-29 19:36:44.942516 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2025-08-29 19:36:44.942521 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2025-08-29 19:36:44.942532 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2025-08-29 19:36:44.942537 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-08-29 19:36:44.942542 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-08-29 19:36:44.942546 | orchestrator | 2025-08-29 19:36:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:36:47.975993 | orchestrator | 2025-08-29 19:36:47 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED 2025-08-29 19:36:47.979732 | orchestrator | 2025-08-29 19:36:47 | INFO  | Task 6fdccf41-ec88-446d-aef8-3fc0a25a8089 is in state STARTED 2025-08-29 19:36:47.979805 | orchestrator | 2025-08-29 19:36:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 
19:36:51.033649 | orchestrator | 2025-08-29 19:36:51 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED 2025-08-29 19:36:51.035129 | orchestrator | 2025-08-29 19:36:51 | INFO  | Task 6fdccf41-ec88-446d-aef8-3fc0a25a8089 is in state STARTED 2025-08-29 19:36:51.035195 | orchestrator | 2025-08-29 19:36:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:37:33.659593 | orchestrator | 2025-08-29 19:37:33 | INFO  | Task d4cea722-fa93-454a-9546-bbc1ad47da8b is in state STARTED 2025-08-29 19:37:33.662713 | orchestrator | 2025-08-29 19:37:33 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:37:33.665486 | orchestrator | 2025-08-29 19:37:33 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state STARTED 2025-08-29 19:37:33.667266 | orchestrator | 2025-08-29 19:37:33 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:37:33.670866 | orchestrator | 2025-08-29 19:37:33 |
INFO  | Task 6fdccf41-ec88-446d-aef8-3fc0a25a8089 is in state SUCCESS 2025-08-29 19:37:33.670939 | orchestrator | 2025-08-29 19:37:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:37:36.699580 | orchestrator | 2025-08-29 19:37:36 | INFO  | Task d4cea722-fa93-454a-9546-bbc1ad47da8b is in state STARTED 2025-08-29 19:37:36.699693 | orchestrator | 2025-08-29 19:37:36 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:37:36.700281 | orchestrator | 2025-08-29 19:37:36 | INFO  | Task c901c6cd-4faa-4aef-994b-6aa012fa246b is in state SUCCESS 2025-08-29 19:37:36.701922 | orchestrator | 2025-08-29 19:37:36.701991 | orchestrator | 2025-08-29 19:37:36.702006 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-08-29 19:37:36.703052 | orchestrator | 2025-08-29 19:37:36.703076 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-08-29 19:37:36.703087 | orchestrator | Friday 29 August 2025 19:36:39 +0000 (0:00:00.233) 0:00:00.233 ********* 2025-08-29 19:37:36.703098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-08-29 19:37:36.703110 | orchestrator | 2025-08-29 19:37:36.703120 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-08-29 19:37:36.703131 | orchestrator | Friday 29 August 2025 19:36:39 +0000 (0:00:00.236) 0:00:00.469 ********* 2025-08-29 19:37:36.703142 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-08-29 19:37:36.703152 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-08-29 19:37:36.703163 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-08-29 19:37:36.703174 | orchestrator | 2025-08-29 19:37:36.703191 | orchestrator | TASK [osism.services.cephclient : Copy configuration 
files] ******************** 2025-08-29 19:37:36.703201 | orchestrator | Friday 29 August 2025 19:36:40 +0000 (0:00:01.292) 0:00:01.762 ********* 2025-08-29 19:37:36.703228 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-08-29 19:37:36.703239 | orchestrator | 2025-08-29 19:37:36.703258 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-08-29 19:37:36.703268 | orchestrator | Friday 29 August 2025 19:36:41 +0000 (0:00:01.214) 0:00:02.977 ********* 2025-08-29 19:37:36.703278 | orchestrator | changed: [testbed-manager] 2025-08-29 19:37:36.703288 | orchestrator | 2025-08-29 19:37:36.703297 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-08-29 19:37:36.703317 | orchestrator | Friday 29 August 2025 19:36:42 +0000 (0:00:01.046) 0:00:04.023 ********* 2025-08-29 19:37:36.703328 | orchestrator | changed: [testbed-manager] 2025-08-29 19:37:36.703347 | orchestrator | 2025-08-29 19:37:36.703356 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-08-29 19:37:36.703366 | orchestrator | Friday 29 August 2025 19:36:43 +0000 (0:00:00.749) 0:00:04.773 ********* 2025-08-29 19:37:36.703376 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-08-29 19:37:36.703386 | orchestrator | ok: [testbed-manager]
2025-08-29 19:37:36.703395 | orchestrator |
2025-08-29 19:37:36.703405 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-08-29 19:37:36.703415 | orchestrator | Friday 29 August 2025 19:37:21 +0000 (0:00:37.699) 0:00:42.473 *********
2025-08-29 19:37:36.703424 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-08-29 19:37:36.703434 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-08-29 19:37:36.703444 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-08-29 19:37:36.703454 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-08-29 19:37:36.703463 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-08-29 19:37:36.703473 | orchestrator |
2025-08-29 19:37:36.703483 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-08-29 19:37:36.703492 | orchestrator | Friday 29 August 2025 19:37:25 +0000 (0:00:04.011) 0:00:46.484 *********
2025-08-29 19:37:36.703519 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-08-29 19:37:36.703530 | orchestrator |
2025-08-29 19:37:36.703540 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-08-29 19:37:36.703549 | orchestrator | Friday 29 August 2025 19:37:25 +0000 (0:00:00.455) 0:00:46.940 *********
2025-08-29 19:37:36.703559 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:37:36.703568 | orchestrator |
2025-08-29 19:37:36.703578 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-08-29 19:37:36.703587 | orchestrator | Friday 29 August 2025 19:37:25 +0000 (0:00:00.142) 0:00:47.083 *********
2025-08-29 19:37:36.703615 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:37:36.703625 | orchestrator |
2025-08-29 19:37:36.703635 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-08-29 19:37:36.703645 | orchestrator | Friday 29 August 2025 19:37:26 +0000 (0:00:00.309) 0:00:47.393 *********
2025-08-29 19:37:36.703654 | orchestrator | changed: [testbed-manager]
2025-08-29 19:37:36.703663 | orchestrator |
2025-08-29 19:37:36.703674 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-08-29 19:37:36.703684 | orchestrator | Friday 29 August 2025 19:37:28 +0000 (0:00:01.867) 0:00:49.260 *********
2025-08-29 19:37:36.703693 | orchestrator | changed: [testbed-manager]
2025-08-29 19:37:36.703702 | orchestrator |
2025-08-29 19:37:36.703712 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-08-29 19:37:36.703722 | orchestrator | Friday 29 August 2025 19:37:28 +0000 (0:00:00.756) 0:00:50.016 *********
2025-08-29 19:37:36.703731 | orchestrator | changed: [testbed-manager]
2025-08-29 19:37:36.703740 | orchestrator |
2025-08-29 19:37:36.703750 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-08-29 19:37:36.703759 | orchestrator | Friday 29 August 2025 19:37:29 +0000 (0:00:00.699) 0:00:50.715 *********
2025-08-29 19:37:36.703769 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-08-29 19:37:36.703778 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-08-29 19:37:36.703788 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-08-29 19:37:36.703798 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-08-29 19:37:36.703807 | orchestrator |
2025-08-29 19:37:36.703817 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:37:36.703827 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 19:37:36.703837 | orchestrator |
2025-08-29 19:37:36.703847 | orchestrator |
2025-08-29 19:37:36.703906 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:37:36.703919 | orchestrator | Friday 29 August 2025 19:37:30 +0000 (0:00:01.477) 0:00:52.193 *********
2025-08-29 19:37:36.703929 | orchestrator | ===============================================================================
2025-08-29 19:37:36.703939 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.70s
2025-08-29 19:37:36.703948 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.01s
2025-08-29 19:37:36.703958 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.87s
2025-08-29 19:37:36.703968 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s
2025-08-29 19:37:36.703977 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s
2025-08-29 19:37:36.703987 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.21s
2025-08-29 19:37:36.703996 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.05s
2025-08-29 19:37:36.704006 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s
2025-08-29 19:37:36.704016 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.75s
2025-08-29 19:37:36.704025 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.70s
2025-08-29 19:37:36.704035 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2025-08-29 19:37:36.704044 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s
2025-08-29 19:37:36.704054 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2025-08-29 19:37:36.704063 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2025-08-29 19:37:36.704073 | orchestrator |
2025-08-29 19:37:36.704082 | orchestrator |
2025-08-29 19:37:36.704092 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:37:36.704109 | orchestrator |
2025-08-29 19:37:36.704119 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:37:36.704128 | orchestrator | Friday 29 August 2025 19:34:55 +0000 (0:00:00.265) 0:00:00.265 *********
2025-08-29 19:37:36.704138 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:37:36.704148 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:37:36.704158 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:37:36.704168 | orchestrator |
2025-08-29 19:37:36.704177 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:37:36.704187 | orchestrator | Friday 29 August 2025 19:34:55 +0000 (0:00:00.314) 0:00:00.580 *********
2025-08-29 19:37:36.704197 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-08-29 19:37:36.704206 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-08-29 19:37:36.704216 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-08-29 19:37:36.704226 | orchestrator |
2025-08-29 19:37:36.704236 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-08-29 19:37:36.704245 | orchestrator |
2025-08-29 19:37:36.704255 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 19:37:36.704265 | orchestrator | Friday 29 August 2025 19:34:55 +0000 (0:00:00.410) 0:00:00.990 *********
2025-08-29 19:37:36.704275 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:37:36.704285 | orchestrator |
2025-08-29 19:37:36.704295 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-08-29 19:37:36.704305 | orchestrator | Friday 29 August 2025 19:34:56 +0000 (0:00:00.542) 0:00:01.533 *********
2025-08-29 19:37:36.704388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.704450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.704470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.704490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.704569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.704581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.704592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.704612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.704627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.704644 | orchestrator |
2025-08-29 19:37:36.704654 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-08-29 19:37:36.704664 | orchestrator | Friday 29 August 2025 19:34:58 +0000 (0:00:01.777) 0:00:03.311 *********
2025-08-29 19:37:36.704674 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-08-29 19:37:36.704684 | orchestrator |
2025-08-29 19:37:36.704694 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-08-29 19:37:36.704704 | orchestrator | Friday 29 August 2025 19:34:59 +0000 (0:00:00.847) 0:00:04.159 *********
2025-08-29 19:37:36.704714 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:37:36.704723 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:37:36.704733 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:37:36.704742 | orchestrator |
2025-08-29 19:37:36.704752 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-08-29 19:37:36.704762 | orchestrator | Friday 29 August 2025 19:34:59 +0000 (0:00:00.492) 0:00:04.651 *********
2025-08-29 19:37:36.704771 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 19:37:36.704781 | orchestrator |
2025-08-29 19:37:36.704791 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 19:37:36.704801 | orchestrator | Friday 29 August 2025 19:35:00 +0000 (0:00:00.710) 0:00:05.362 *********
2025-08-29 19:37:36.704810 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:37:36.704820 | orchestrator |
2025-08-29 19:37:36.704830 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-08-29 19:37:36.704839 | orchestrator | Friday 29 August 2025 19:35:00 +0000 (0:00:00.537) 0:00:05.899 *********
2025-08-29 19:37:36.704850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.704869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.704893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.704905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.704915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.704926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.704935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.704951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.704969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.704979 | orchestrator |
2025-08-29 19:37:36.704993 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-08-29 19:37:36.705003 | orchestrator | Friday 29 August 2025 19:35:04 +0000 (0:00:03.308) 0:00:09.207 *********
2025-08-29 19:37:36.705013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.705024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.705035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.705045 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:37:36.705061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.705079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.705093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.705104 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:37:36.705114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.705125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.705135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.705150 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:37:36.705160 | orchestrator |
2025-08-29 19:37:36.705170 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-08-29 19:37:36.705180 | orchestrator | Friday 29 August 2025 19:35:04 +0000 (0:00:00.822) 0:00:10.030 *********
2025-08-29 19:37:36.705197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.705213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 19:37:36.705224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 19:37:36.705234 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:37:36.705245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 19:37:36.705255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.705277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 19:37:36.705288 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.705305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 19:37:36.705316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.705326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 19:37:36.705336 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.705345 | orchestrator | 2025-08-29 19:37:36.705355 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 19:37:36.705365 | orchestrator | Friday 29 August 2025 19:35:05 +0000 (0:00:00.758) 0:00:10.789 ********* 2025-08-29 19:37:36.705375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.705397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.705413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.705424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705519 | orchestrator | 2025-08-29 19:37:36.705529 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 19:37:36.705539 | orchestrator | Friday 29 August 2025 19:35:08 +0000 (0:00:03.228) 0:00:14.018 ********* 2025-08-29 19:37:36.705549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.705560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.705577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.705596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.705611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.705622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.705633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.705669 | orchestrator | 2025-08-29 19:37:36.705686 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 19:37:36.705702 | orchestrator | Friday 29 August 2025 19:35:14 +0000 (0:00:05.628) 0:00:19.647 ********* 2025-08-29 19:37:36.705718 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.705739 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:37:36.705755 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:37:36.705769 | orchestrator | 2025-08-29 19:37:36.705785 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 19:37:36.705801 | orchestrator | Friday 29 August 2025 19:35:15 +0000 (0:00:01.452) 0:00:21.099 ********* 2025-08-29 19:37:36.705814 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.705828 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.705842 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.705858 | orchestrator | 2025-08-29 19:37:36.705873 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 19:37:36.705891 | orchestrator | Friday 29 August 2025 19:35:16 +0000 (0:00:00.553) 0:00:21.652 ********* 2025-08-29 19:37:36.705909 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.705926 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.705941 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.705958 | orchestrator | 2025-08-29 19:37:36.705974 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-08-29 19:37:36.705990 | orchestrator | Friday 29 August 2025 19:35:16 +0000 (0:00:00.321) 0:00:21.974 ********* 2025-08-29 19:37:36.706006 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.706067 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 19:37:36.706077 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.706087 | orchestrator | 2025-08-29 19:37:36.706097 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-08-29 19:37:36.706107 | orchestrator | Friday 29 August 2025 19:35:17 +0000 (0:00:00.507) 0:00:22.481 ********* 2025-08-29 19:37:36.706118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.706138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.706149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.706168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.706183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.706194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 19:37:36.706210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.706220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.706230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.706240 | orchestrator | 2025-08-29 19:37:36.706250 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 19:37:36.706260 | orchestrator | Friday 29 August 2025 19:35:19 +0000 (0:00:02.540) 0:00:25.022 ********* 2025-08-29 19:37:36.706270 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.706280 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.706290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.706299 | orchestrator | 2025-08-29 19:37:36.706309 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ****************************** 2025-08-29 19:37:36.706318 | orchestrator | Friday 29 August 2025 19:35:20 +0000 (0:00:00.328) 0:00:25.350 ********* 2025-08-29 19:37:36.706334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 19:37:36.706344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 19:37:36.706354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 19:37:36.706364 | orchestrator | 2025-08-29 19:37:36.706373 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-08-29 19:37:36.706383 | orchestrator | Friday 29 August 2025 19:35:22 +0000 (0:00:01.761) 0:00:27.111 ********* 2025-08-29 19:37:36.706393 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:37:36.706402 | orchestrator | 2025-08-29 19:37:36.706412 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-08-29 19:37:36.706427 | orchestrator | Friday 29 August 2025 19:35:22 +0000 (0:00:00.985) 0:00:28.097 ********* 2025-08-29 19:37:36.706436 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.706446 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.706460 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.706470 | orchestrator | 2025-08-29 19:37:36.706480 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-08-29 19:37:36.706489 | orchestrator | Friday 29 August 2025 19:35:23 +0000 (0:00:00.770) 0:00:28.867 ********* 2025-08-29 19:37:36.706517 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:37:36.706528 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 19:37:36.706537 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 19:37:36.706547 
| orchestrator | 2025-08-29 19:37:36.706557 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-08-29 19:37:36.706566 | orchestrator | Friday 29 August 2025 19:35:24 +0000 (0:00:01.028) 0:00:29.896 ********* 2025-08-29 19:37:36.706576 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:37:36.706585 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:37:36.706595 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:37:36.706605 | orchestrator | 2025-08-29 19:37:36.706614 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-08-29 19:37:36.706624 | orchestrator | Friday 29 August 2025 19:35:25 +0000 (0:00:00.317) 0:00:30.214 ********* 2025-08-29 19:37:36.706633 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 19:37:36.706643 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 19:37:36.706653 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 19:37:36.706662 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 19:37:36.706672 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 19:37:36.706681 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 19:37:36.706691 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 19:37:36.706701 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 19:37:36.706710 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 19:37:36.706720 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 19:37:36.706729 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 19:37:36.706739 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 19:37:36.706748 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 19:37:36.706758 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 19:37:36.706767 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 19:37:36.706777 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 19:37:36.706787 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 19:37:36.706797 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 19:37:36.706806 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 19:37:36.706816 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 19:37:36.706825 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 19:37:36.706841 | orchestrator | 2025-08-29 19:37:36.706851 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-08-29 19:37:36.706861 | orchestrator | Friday 29 August 2025 19:35:34 +0000 (0:00:09.109) 0:00:39.323 ********* 2025-08-29 19:37:36.706870 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 19:37:36.706879 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 19:37:36.706889 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 19:37:36.706904 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 19:37:36.706914 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 19:37:36.706924 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 19:37:36.706933 | orchestrator | 2025-08-29 19:37:36.706943 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-08-29 19:37:36.706952 | orchestrator | Friday 29 August 2025 19:35:37 +0000 (0:00:02.809) 0:00:42.133 ********* 2025-08-29 19:37:36.706970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.706982 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.706993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 19:37:36.707010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 19:37:36.707026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 19:37:36.707041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 19:37:36.707051 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.707061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.707071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 19:37:36.707089 | orchestrator | 2025-08-29 19:37:36.707099 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-08-29 19:37:36.707109 | orchestrator | Friday 29 August 2025 19:35:39 +0000 (0:00:02.282) 0:00:44.415 ********* 2025-08-29 19:37:36.707118 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.707128 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.707138 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.707147 | orchestrator | 2025-08-29 19:37:36.707157 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-08-29 19:37:36.707166 | orchestrator | Friday 29 August 2025 19:35:39 +0000 (0:00:00.305) 0:00:44.721 ********* 2025-08-29 19:37:36.707176 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707186 | orchestrator | 2025-08-29 19:37:36.707195 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-08-29 19:37:36.707205 | orchestrator | Friday 29 August 2025 19:35:41 +0000 (0:00:02.186) 0:00:46.907 ********* 2025-08-29 19:37:36.707214 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707224 | orchestrator | 2025-08-29 19:37:36.707233 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-08-29 19:37:36.707243 | orchestrator | Friday 29 August 2025 19:35:43 +0000 (0:00:02.145) 0:00:49.053 ********* 2025-08-29 19:37:36.707252 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:37:36.707262 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:37:36.707271 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:37:36.707281 | orchestrator | 2025-08-29 19:37:36.707290 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-08-29 19:37:36.707305 | orchestrator | Friday 29 August 2025 19:35:44 +0000 (0:00:00.965) 0:00:50.019 ********* 2025-08-29 19:37:36.707315 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:37:36.707324 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 19:37:36.707334 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:37:36.707343 | orchestrator | 2025-08-29 19:37:36.707353 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-08-29 19:37:36.707363 | orchestrator | Friday 29 August 2025 19:35:45 +0000 (0:00:00.744) 0:00:50.764 ********* 2025-08-29 19:37:36.707372 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.707382 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.707391 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.707401 | orchestrator | 2025-08-29 19:37:36.707410 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-08-29 19:37:36.707420 | orchestrator | Friday 29 August 2025 19:35:46 +0000 (0:00:00.451) 0:00:51.215 ********* 2025-08-29 19:37:36.707429 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707439 | orchestrator | 2025-08-29 19:37:36.707448 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-08-29 19:37:36.707458 | orchestrator | Friday 29 August 2025 19:35:59 +0000 (0:00:13.657) 0:01:04.873 ********* 2025-08-29 19:37:36.707472 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707481 | orchestrator | 2025-08-29 19:37:36.707491 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 19:37:36.707517 | orchestrator | Friday 29 August 2025 19:36:09 +0000 (0:00:09.881) 0:01:14.754 ********* 2025-08-29 19:37:36.707527 | orchestrator | 2025-08-29 19:37:36.707537 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 19:37:36.707547 | orchestrator | Friday 29 August 2025 19:36:09 +0000 (0:00:00.068) 0:01:14.822 ********* 2025-08-29 19:37:36.707556 | orchestrator | 2025-08-29 19:37:36.707566 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-08-29 19:37:36.707575 | orchestrator | Friday 29 August 2025 19:36:09 +0000 (0:00:00.065) 0:01:14.888 ********* 2025-08-29 19:37:36.707585 | orchestrator | 2025-08-29 19:37:36.707594 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-08-29 19:37:36.707603 | orchestrator | Friday 29 August 2025 19:36:09 +0000 (0:00:00.067) 0:01:14.955 ********* 2025-08-29 19:37:36.707622 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707632 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:37:36.707642 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:37:36.707651 | orchestrator | 2025-08-29 19:37:36.707661 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-08-29 19:37:36.707670 | orchestrator | Friday 29 August 2025 19:36:37 +0000 (0:00:27.436) 0:01:42.392 ********* 2025-08-29 19:37:36.707680 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707689 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:37:36.707699 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:37:36.707708 | orchestrator | 2025-08-29 19:37:36.707718 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-08-29 19:37:36.707727 | orchestrator | Friday 29 August 2025 19:36:42 +0000 (0:00:05.375) 0:01:47.768 ********* 2025-08-29 19:37:36.707737 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707746 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:37:36.707756 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:37:36.707766 | orchestrator | 2025-08-29 19:37:36.707775 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 19:37:36.707784 | orchestrator | Friday 29 August 2025 19:36:48 +0000 (0:00:06.236) 0:01:54.004 ********* 2025-08-29 19:37:36.707794 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:37:36.707804 | orchestrator | 2025-08-29 19:37:36.707813 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-08-29 19:37:36.707823 | orchestrator | Friday 29 August 2025 19:36:49 +0000 (0:00:00.775) 0:01:54.779 ********* 2025-08-29 19:37:36.707832 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:37:36.707841 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:37:36.707851 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:37:36.707861 | orchestrator | 2025-08-29 19:37:36.707870 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-08-29 19:37:36.707880 | orchestrator | Friday 29 August 2025 19:36:50 +0000 (0:00:00.737) 0:01:55.517 ********* 2025-08-29 19:37:36.707889 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:37:36.707899 | orchestrator | 2025-08-29 19:37:36.707908 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-08-29 19:37:36.707918 | orchestrator | Friday 29 August 2025 19:36:52 +0000 (0:00:01.889) 0:01:57.406 ********* 2025-08-29 19:37:36.707927 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-08-29 19:37:36.707936 | orchestrator | 2025-08-29 19:37:36.707946 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-08-29 19:37:36.707955 | orchestrator | Friday 29 August 2025 19:37:02 +0000 (0:00:10.421) 0:02:07.827 ********* 2025-08-29 19:37:36.707965 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-08-29 19:37:36.707974 | orchestrator | 2025-08-29 19:37:36.707984 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-08-29 19:37:36.707993 | orchestrator | Friday 29 August 2025 19:37:23 +0000 (0:00:20.912) 0:02:28.739 ********* 2025-08-29 
19:37:36.708003 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-08-29 19:37:36.708012 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-08-29 19:37:36.708022 | orchestrator | 2025-08-29 19:37:36.708031 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-08-29 19:37:36.708041 | orchestrator | Friday 29 August 2025 19:37:30 +0000 (0:00:06.734) 0:02:35.474 ********* 2025-08-29 19:37:36.708050 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.708060 | orchestrator | 2025-08-29 19:37:36.708069 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-08-29 19:37:36.708079 | orchestrator | Friday 29 August 2025 19:37:30 +0000 (0:00:00.125) 0:02:35.600 ********* 2025-08-29 19:37:36.708088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.708104 | orchestrator | 2025-08-29 19:37:36.708118 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-08-29 19:37:36.708129 | orchestrator | Friday 29 August 2025 19:37:30 +0000 (0:00:00.117) 0:02:35.717 ********* 2025-08-29 19:37:36.708138 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.708148 | orchestrator | 2025-08-29 19:37:36.708157 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-08-29 19:37:36.708167 | orchestrator | Friday 29 August 2025 19:37:30 +0000 (0:00:00.120) 0:02:35.838 ********* 2025-08-29 19:37:36.708177 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.708186 | orchestrator | 2025-08-29 19:37:36.708196 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-08-29 19:37:36.708206 | orchestrator | Friday 29 August 2025 19:37:31 +0000 (0:00:00.529) 0:02:36.367 ********* 2025-08-29 19:37:36.708215 
| orchestrator | ok: [testbed-node-0] 2025-08-29 19:37:36.708225 | orchestrator | 2025-08-29 19:37:36.708234 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 19:37:36.708244 | orchestrator | Friday 29 August 2025 19:37:34 +0000 (0:00:03.330) 0:02:39.698 ********* 2025-08-29 19:37:36.708253 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:37:36.708268 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:37:36.708278 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:37:36.708287 | orchestrator | 2025-08-29 19:37:36.708297 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:37:36.708307 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-08-29 19:37:36.708317 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 19:37:36.708327 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 19:37:36.708337 | orchestrator | 2025-08-29 19:37:36.708346 | orchestrator | 2025-08-29 19:37:36.708356 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:37:36.708365 | orchestrator | Friday 29 August 2025 19:37:35 +0000 (0:00:00.481) 0:02:40.179 ********* 2025-08-29 19:37:36.708375 | orchestrator | =============================================================================== 2025-08-29 19:37:36.708384 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 27.44s 2025-08-29 19:37:36.708394 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.91s 2025-08-29 19:37:36.708403 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.66s 2025-08-29 19:37:36.708413 | orchestrator | keystone : 
Creating admin project, user, role, service, and endpoint --- 10.42s 2025-08-29 19:37:36.708422 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.88s 2025-08-29 19:37:36.708435 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.11s 2025-08-29 19:37:36.708452 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.73s 2025-08-29 19:37:36.708467 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.24s 2025-08-29 19:37:36.708483 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.63s 2025-08-29 19:37:36.708529 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.38s 2025-08-29 19:37:36.708549 | orchestrator | keystone : Creating default user role ----------------------------------- 3.33s 2025-08-29 19:37:36.708560 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.31s 2025-08-29 19:37:36.708570 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.23s 2025-08-29 19:37:36.708580 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.81s 2025-08-29 19:37:36.708589 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.54s 2025-08-29 19:37:36.708607 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.28s 2025-08-29 19:37:36.708616 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.19s 2025-08-29 19:37:36.708626 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.15s 2025-08-29 19:37:36.708635 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.89s 2025-08-29 19:37:36.708645 | orchestrator | keystone : Ensuring config 
directories exist ---------------------------- 1.78s 2025-08-29 19:37:36.708654 | orchestrator | 2025-08-29 19:37:36 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:37:36.708664 | orchestrator | 2025-08-29 19:37:36 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:37:36.708674 | orchestrator | 2025-08-29 19:37:36 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:37:36.708684 | orchestrator | 2025-08-29 19:37:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:37:39.731955 | orchestrator | 2025-08-29 19:37:39 | INFO  | Task d4cea722-fa93-454a-9546-bbc1ad47da8b is in state SUCCESS 2025-08-29 19:37:39.732108 | orchestrator | 2025-08-29 19:37:39 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:37:39.732919 | orchestrator | 2025-08-29 19:37:39 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:37:39.735977 | orchestrator | 2025-08-29 19:37:39 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:37:39.736087 | orchestrator | 2025-08-29 19:37:39 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:37:39.736914 | orchestrator | 2025-08-29 19:37:39 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:37:39.736975 | orchestrator | 2025-08-29 19:37:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:37:42.775339 | orchestrator | 2025-08-29 19:37:42 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:37:42.775452 | orchestrator | 2025-08-29 19:37:42 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:37:42.775548 | orchestrator | 2025-08-29 19:37:42 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:37:42.775566 | orchestrator | 2025-08-29 19:37:42 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:37:42.775578 | orchestrator | 2025-08-29 19:37:42 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:37:42.775589 | orchestrator | 2025-08-29 19:37:42 | INFO  | Wait 1 second(s) until the next check [identical polling output for tasks d1e65660, bb786318, 6913cdcc, 53df4cc6 and 2d4d2a53, repeated every ~3 seconds from 19:37:45 through 19:38:10, trimmed] 2025-08-29 19:38:13.206333 | orchestrator | 2025-08-29 19:38:13 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:13.207181 | orchestrator | 2025-08-29 19:38:13 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:13.208620 | orchestrator | 2025-08-29 19:38:13 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:13.210434 | orchestrator | 2025-08-29 19:38:13 | INFO  | Task
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:13.212636 | orchestrator | 2025-08-29 19:38:13 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:13.213016 | orchestrator | 2025-08-29 19:38:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:16.242590 | orchestrator | 2025-08-29 19:38:16 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:16.242693 | orchestrator | 2025-08-29 19:38:16 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:16.243395 | orchestrator | 2025-08-29 19:38:16 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:16.243801 | orchestrator | 2025-08-29 19:38:16 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:16.244629 | orchestrator | 2025-08-29 19:38:16 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:16.244674 | orchestrator | 2025-08-29 19:38:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:19.275778 | orchestrator | 2025-08-29 19:38:19 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:19.276414 | orchestrator | 2025-08-29 19:38:19 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:19.278142 | orchestrator | 2025-08-29 19:38:19 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:19.279643 | orchestrator | 2025-08-29 19:38:19 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:19.280967 | orchestrator | 2025-08-29 19:38:19 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:19.281029 | orchestrator | 2025-08-29 19:38:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:22.338621 | orchestrator | 2025-08-29 19:38:22 | INFO  | Task 
d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:22.338712 | orchestrator | 2025-08-29 19:38:22 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:22.338735 | orchestrator | 2025-08-29 19:38:22 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:22.338755 | orchestrator | 2025-08-29 19:38:22 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:22.338774 | orchestrator | 2025-08-29 19:38:22 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:22.338793 | orchestrator | 2025-08-29 19:38:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:25.353738 | orchestrator | 2025-08-29 19:38:25 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:25.353839 | orchestrator | 2025-08-29 19:38:25 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:25.354528 | orchestrator | 2025-08-29 19:38:25 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:25.355089 | orchestrator | 2025-08-29 19:38:25 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:25.355668 | orchestrator | 2025-08-29 19:38:25 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:25.355745 | orchestrator | 2025-08-29 19:38:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:28.376394 | orchestrator | 2025-08-29 19:38:28 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:28.376604 | orchestrator | 2025-08-29 19:38:28 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:28.377352 | orchestrator | 2025-08-29 19:38:28 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:28.378789 | orchestrator | 2025-08-29 19:38:28 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:28.380505 | orchestrator | 2025-08-29 19:38:28 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:28.380529 | orchestrator | 2025-08-29 19:38:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:31.407017 | orchestrator | 2025-08-29 19:38:31 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:31.410245 | orchestrator | 2025-08-29 19:38:31 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:31.410687 | orchestrator | 2025-08-29 19:38:31 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:31.411655 | orchestrator | 2025-08-29 19:38:31 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:31.412461 | orchestrator | 2025-08-29 19:38:31 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:31.412839 | orchestrator | 2025-08-29 19:38:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:34.438662 | orchestrator | 2025-08-29 19:38:34 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:34.438772 | orchestrator | 2025-08-29 19:38:34 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:34.439296 | orchestrator | 2025-08-29 19:38:34 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:34.440661 | orchestrator | 2025-08-29 19:38:34 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:34.441116 | orchestrator | 2025-08-29 19:38:34 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:34.441146 | orchestrator | 2025-08-29 19:38:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:37.482717 | orchestrator | 2025-08-29 19:38:37 | INFO  | Task 
d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:37.483503 | orchestrator | 2025-08-29 19:38:37 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:37.484199 | orchestrator | 2025-08-29 19:38:37 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:37.484754 | orchestrator | 2025-08-29 19:38:37 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:37.485339 | orchestrator | 2025-08-29 19:38:37 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:37.485350 | orchestrator | 2025-08-29 19:38:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:40.509309 | orchestrator | 2025-08-29 19:38:40 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:40.510518 | orchestrator | 2025-08-29 19:38:40 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:40.511091 | orchestrator | 2025-08-29 19:38:40 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:40.511774 | orchestrator | 2025-08-29 19:38:40 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:40.513410 | orchestrator | 2025-08-29 19:38:40 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:40.513448 | orchestrator | 2025-08-29 19:38:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:43.534538 | orchestrator | 2025-08-29 19:38:43 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:43.534660 | orchestrator | 2025-08-29 19:38:43 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:43.535321 | orchestrator | 2025-08-29 19:38:43 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:43.536672 | orchestrator | 2025-08-29 19:38:43 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:43.537167 | orchestrator | 2025-08-29 19:38:43 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:43.537195 | orchestrator | 2025-08-29 19:38:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:46.558928 | orchestrator | 2025-08-29 19:38:46 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:46.559074 | orchestrator | 2025-08-29 19:38:46 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:46.559385 | orchestrator | 2025-08-29 19:38:46 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:46.560037 | orchestrator | 2025-08-29 19:38:46 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:46.560366 | orchestrator | 2025-08-29 19:38:46 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:46.560389 | orchestrator | 2025-08-29 19:38:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:49.604782 | orchestrator | 2025-08-29 19:38:49 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:49.605699 | orchestrator | 2025-08-29 19:38:49 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:49.606461 | orchestrator | 2025-08-29 19:38:49 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:49.606999 | orchestrator | 2025-08-29 19:38:49 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:49.607460 | orchestrator | 2025-08-29 19:38:49 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:49.607564 | orchestrator | 2025-08-29 19:38:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:52.634690 | orchestrator | 2025-08-29 19:38:52 | INFO  | Task 
d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:52.634812 | orchestrator | 2025-08-29 19:38:52 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:52.635220 | orchestrator | 2025-08-29 19:38:52 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:52.635777 | orchestrator | 2025-08-29 19:38:52 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:52.636793 | orchestrator | 2025-08-29 19:38:52 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:52.636836 | orchestrator | 2025-08-29 19:38:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:55.656120 | orchestrator | 2025-08-29 19:38:55 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:55.656317 | orchestrator | 2025-08-29 19:38:55 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state STARTED 2025-08-29 19:38:55.657054 | orchestrator | 2025-08-29 19:38:55 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:55.657668 | orchestrator | 2025-08-29 19:38:55 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:55.658482 | orchestrator | 2025-08-29 19:38:55 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:55.658507 | orchestrator | 2025-08-29 19:38:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:38:58.685192 | orchestrator | 2025-08-29 19:38:58 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:38:58.685584 | orchestrator | 2025-08-29 19:38:58 | INFO  | Task bb786318-d262-4ca1-92fe-6aa5f882bdb4 is in state SUCCESS 2025-08-29 19:38:58.686247 | orchestrator | 2025-08-29 19:38:58 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:38:58.686931 | orchestrator | 2025-08-29 19:38:58 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:38:58.687650 | orchestrator | 2025-08-29 19:38:58 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:38:58.687683 | orchestrator | 2025-08-29 19:38:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:01.716724 | orchestrator | 2025-08-29 19:39:01 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:01.717762 | orchestrator | 2025-08-29 19:39:01 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:01.717799 | orchestrator | 2025-08-29 19:39:01 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:01.718568 | orchestrator | 2025-08-29 19:39:01 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:01.718603 | orchestrator | 2025-08-29 19:39:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:04.738523 | orchestrator | 2025-08-29 19:39:04 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:04.738759 | orchestrator | 2025-08-29 19:39:04 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:04.739307 | orchestrator | 2025-08-29 19:39:04 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:04.740066 | orchestrator | 2025-08-29 19:39:04 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:04.740087 | orchestrator | 2025-08-29 19:39:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:07.763965 | orchestrator | 2025-08-29 19:39:07 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:07.764067 | orchestrator | 2025-08-29 19:39:07 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:07.764443 | orchestrator | 2025-08-29 19:39:07 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:07.765162 | orchestrator | 2025-08-29 19:39:07 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:07.765237 | orchestrator | 2025-08-29 19:39:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:10.807103 | orchestrator | 2025-08-29 19:39:10 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:10.807377 | orchestrator | 2025-08-29 19:39:10 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:10.809224 | orchestrator | 2025-08-29 19:39:10 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:10.810196 | orchestrator | 2025-08-29 19:39:10 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:10.810224 | orchestrator | 2025-08-29 19:39:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:13.828450 | orchestrator | 2025-08-29 19:39:13 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:13.828608 | orchestrator | 2025-08-29 19:39:13 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:13.829139 | orchestrator | 2025-08-29 19:39:13 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:13.829949 | orchestrator | 2025-08-29 19:39:13 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:13.829977 | orchestrator | 2025-08-29 19:39:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:16.850687 | orchestrator | 2025-08-29 19:39:16 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:16.850865 | orchestrator | 2025-08-29 19:39:16 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:16.851224 | orchestrator | 2025-08-29 19:39:16 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:16.851799 | orchestrator | 2025-08-29 19:39:16 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:16.852171 | orchestrator | 2025-08-29 19:39:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:19.874990 | orchestrator | 2025-08-29 19:39:19 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:19.875097 | orchestrator | 2025-08-29 19:39:19 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:19.877981 | orchestrator | 2025-08-29 19:39:19 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:19.878162 | orchestrator | 2025-08-29 19:39:19 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:19.878186 | orchestrator | 2025-08-29 19:39:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:22.908701 | orchestrator | 2025-08-29 19:39:22 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:22.908808 | orchestrator | 2025-08-29 19:39:22 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:22.908823 | orchestrator | 2025-08-29 19:39:22 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:22.908836 | orchestrator | 2025-08-29 19:39:22 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:22.908847 | orchestrator | 2025-08-29 19:39:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:25.928273 | orchestrator | 2025-08-29 19:39:25 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:25.928397 | orchestrator | 2025-08-29 19:39:25 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:25.929023 | orchestrator | 2025-08-29 19:39:25 | INFO  | Task 
53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:25.929654 | orchestrator | 2025-08-29 19:39:25 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:25.929689 | orchestrator | 2025-08-29 19:39:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:28.950183 | orchestrator | 2025-08-29 19:39:28 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:28.950284 | orchestrator | 2025-08-29 19:39:28 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:28.951660 | orchestrator | 2025-08-29 19:39:28 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:28.952038 | orchestrator | 2025-08-29 19:39:28 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:28.952072 | orchestrator | 2025-08-29 19:39:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:31.975067 | orchestrator | 2025-08-29 19:39:31 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:31.975704 | orchestrator | 2025-08-29 19:39:31 | INFO  | Task 6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state STARTED 2025-08-29 19:39:31.976735 | orchestrator | 2025-08-29 19:39:31 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:31.978104 | orchestrator | 2025-08-29 19:39:31 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:31.978136 | orchestrator | 2025-08-29 19:39:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:35.008089 | orchestrator | 2025-08-29 19:39:35 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:35.008195 | orchestrator | 2025-08-29 19:39:35 | INFO  | Task ada1af86-0924-4fb3-86d3-fd7db71e3fa2 is in state STARTED 2025-08-29 19:39:35.012454 | orchestrator | 2025-08-29 19:39:35 | INFO  | Task 
6913cdcc-e289-4c14-8b74-3fa4d1f0cf93 is in state SUCCESS 2025-08-29 19:39:35.016584 | orchestrator | 2025-08-29 19:39:35.016657 | orchestrator | 2025-08-29 19:39:35.016693 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:39:35.016703 | orchestrator | 2025-08-29 19:39:35.016724 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:39:35.016739 | orchestrator | Friday 29 August 2025 19:37:35 +0000 (0:00:00.214) 0:00:00.214 ********* 2025-08-29 19:39:35.016759 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:39:35.016775 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:39:35.016788 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:39:35.016802 | orchestrator | 2025-08-29 19:39:35.016817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:39:35.016830 | orchestrator | Friday 29 August 2025 19:37:35 +0000 (0:00:00.394) 0:00:00.608 ********* 2025-08-29 19:39:35.016839 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 19:39:35.016847 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 19:39:35.016855 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 19:39:35.016942 | orchestrator | 2025-08-29 19:39:35.016952 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 19:39:35.016960 | orchestrator | 2025-08-29 19:39:35.016968 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 19:39:35.016976 | orchestrator | Friday 29 August 2025 19:37:36 +0000 (0:00:01.066) 0:00:01.675 ********* 2025-08-29 19:39:35.016984 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:39:35.016992 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:39:35.017000 | orchestrator | ok: [testbed-node-1] 2025-08-29 
19:39:35.017007 | orchestrator | 2025-08-29 19:39:35.017015 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:39:35.017024 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:39:35.017033 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:39:35.017041 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:39:35.017048 | orchestrator | 2025-08-29 19:39:35.017056 | orchestrator | 2025-08-29 19:39:35.017064 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:39:35.017072 | orchestrator | Friday 29 August 2025 19:37:38 +0000 (0:00:01.089) 0:00:02.764 ********* 2025-08-29 19:39:35.017082 | orchestrator | =============================================================================== 2025-08-29 19:39:35.017090 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.09s 2025-08-29 19:39:35.017099 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.07s 2025-08-29 19:39:35.017108 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-08-29 19:39:35.017117 | orchestrator | 2025-08-29 19:39:35.017126 | orchestrator | 2025-08-29 19:39:35.017135 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-08-29 19:39:35.017144 | orchestrator | 2025-08-29 19:39:35.017153 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-08-29 19:39:35.017162 | orchestrator | Friday 29 August 2025 19:37:35 +0000 (0:00:00.280) 0:00:00.280 ********* 2025-08-29 19:39:35.017171 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017180 | orchestrator | 2025-08-29 
19:39:35.017189 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-08-29 19:39:35.017213 | orchestrator | Friday 29 August 2025 19:37:37 +0000 (0:00:01.740) 0:00:02.020 ********* 2025-08-29 19:39:35.017223 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017232 | orchestrator | 2025-08-29 19:39:35.017241 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-08-29 19:39:35.017251 | orchestrator | Friday 29 August 2025 19:37:38 +0000 (0:00:01.111) 0:00:03.132 ********* 2025-08-29 19:39:35.017260 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017269 | orchestrator | 2025-08-29 19:39:35.017278 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-08-29 19:39:35.017286 | orchestrator | Friday 29 August 2025 19:37:39 +0000 (0:00:00.942) 0:00:04.075 ********* 2025-08-29 19:39:35.017294 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017301 | orchestrator | 2025-08-29 19:39:35.017309 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-08-29 19:39:35.017317 | orchestrator | Friday 29 August 2025 19:37:40 +0000 (0:00:01.041) 0:00:05.117 ********* 2025-08-29 19:39:35.017325 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017332 | orchestrator | 2025-08-29 19:39:35.017340 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-08-29 19:39:35.017348 | orchestrator | Friday 29 August 2025 19:37:41 +0000 (0:00:01.153) 0:00:06.270 ********* 2025-08-29 19:39:35.017355 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017363 | orchestrator | 2025-08-29 19:39:35.017371 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-08-29 19:39:35.017379 | orchestrator | Friday 29 August 2025 19:37:42 +0000 (0:00:01.157) 
0:00:07.428 ********* 2025-08-29 19:39:35.017386 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017394 | orchestrator | 2025-08-29 19:39:35.017402 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-08-29 19:39:35.017410 | orchestrator | Friday 29 August 2025 19:37:44 +0000 (0:00:01.154) 0:00:08.582 ********* 2025-08-29 19:39:35.017417 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017425 | orchestrator | 2025-08-29 19:39:35.017433 | orchestrator | TASK [Create admin user] ******************************************************* 2025-08-29 19:39:35.017441 | orchestrator | Friday 29 August 2025 19:37:45 +0000 (0:00:01.007) 0:00:09.590 ********* 2025-08-29 19:39:35.017448 | orchestrator | changed: [testbed-manager] 2025-08-29 19:39:35.017456 | orchestrator | 2025-08-29 19:39:35.017464 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-08-29 19:39:35.017472 | orchestrator | Friday 29 August 2025 19:38:33 +0000 (0:00:48.519) 0:00:58.109 ********* 2025-08-29 19:39:35.017495 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:39:35.017509 | orchestrator | 2025-08-29 19:39:35.017549 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 19:39:35.017564 | orchestrator | 2025-08-29 19:39:35.017585 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 19:39:35.017599 | orchestrator | Friday 29 August 2025 19:38:33 +0000 (0:00:00.147) 0:00:58.256 ********* 2025-08-29 19:39:35.017613 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:39:35.017627 | orchestrator | 2025-08-29 19:39:35.017641 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 19:39:35.017650 | orchestrator | 2025-08-29 19:39:35.017658 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2025-08-29 19:39:35.017666 | orchestrator | Friday 29 August 2025 19:38:35 +0000 (0:00:01.593) 0:00:59.850 ********* 2025-08-29 19:39:35.017674 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:39:35.017681 | orchestrator | 2025-08-29 19:39:35.017689 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 19:39:35.017697 | orchestrator | 2025-08-29 19:39:35.017705 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 19:39:35.017713 | orchestrator | Friday 29 August 2025 19:38:46 +0000 (0:00:11.321) 0:01:11.172 ********* 2025-08-29 19:39:35.017720 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:39:35.017744 | orchestrator | 2025-08-29 19:39:35.017764 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:39:35.017778 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 19:39:35.017794 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:39:35.017808 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:39:35.017821 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:39:35.017830 | orchestrator | 2025-08-29 19:39:35.017838 | orchestrator | 2025-08-29 19:39:35.017845 | orchestrator | 2025-08-29 19:39:35.017853 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:39:35.017861 | orchestrator | Friday 29 August 2025 19:38:57 +0000 (0:00:11.155) 0:01:22.327 ********* 2025-08-29 19:39:35.017869 | orchestrator | =============================================================================== 2025-08-29 19:39:35.017877 | orchestrator | 
Create admin user ------------------------------------------------------ 48.52s 2025-08-29 19:39:35.017885 | orchestrator | Restart ceph manager service ------------------------------------------- 24.07s 2025-08-29 19:39:35.017893 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.74s 2025-08-29 19:39:35.017900 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.16s 2025-08-29 19:39:35.017908 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.15s 2025-08-29 19:39:35.017916 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.15s 2025-08-29 19:39:35.017924 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.11s 2025-08-29 19:39:35.017932 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s 2025-08-29 19:39:35.017939 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.01s 2025-08-29 19:39:35.017947 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.94s 2025-08-29 19:39:35.017955 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2025-08-29 19:39:35.017963 | orchestrator | 2025-08-29 19:39:35.017989 | orchestrator | 2025-08-29 19:39:35.017997 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:39:35.018005 | orchestrator | 2025-08-29 19:39:35.018083 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:39:35.018093 | orchestrator | Friday 29 August 2025 19:37:42 +0000 (0:00:00.778) 0:00:00.778 ********* 2025-08-29 19:39:35.018101 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:39:35.018109 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:39:35.018117 | orchestrator | ok: 
[testbed-node-2]
2025-08-29 19:39:35.018124 | orchestrator |
2025-08-29 19:39:35.018132 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:39:35.018140 | orchestrator | Friday 29 August 2025 19:37:42 +0000 (0:00:00.565) 0:00:01.344 *********
2025-08-29 19:39:35.018148 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-08-29 19:39:35.018156 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-08-29 19:39:35.018164 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-08-29 19:39:35.018171 | orchestrator |
2025-08-29 19:39:35.018179 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-08-29 19:39:35.018187 | orchestrator |
2025-08-29 19:39:35.018195 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 19:39:35.018203 | orchestrator | Friday 29 August 2025 19:37:43 +0000 (0:00:00.448) 0:00:01.793 *********
2025-08-29 19:39:35.018211 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:39:35.018227 | orchestrator |
2025-08-29 19:39:35.018235 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-08-29 19:39:35.018242 | orchestrator | Friday 29 August 2025 19:37:43 +0000 (0:00:00.557) 0:00:02.350 *********
2025-08-29 19:39:35.018250 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-08-29 19:39:35.018258 | orchestrator |
2025-08-29 19:39:35.018266 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-08-29 19:39:35.018282 | orchestrator | Friday 29 August 2025 19:37:47 +0000 (0:00:03.559) 0:00:05.910 *********
2025-08-29 19:39:35.018294 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-08-29 19:39:35.018303 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-08-29 19:39:35.018311 | orchestrator |
2025-08-29 19:39:35.018318 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-08-29 19:39:35.018326 | orchestrator | Friday 29 August 2025 19:37:53 +0000 (0:00:06.039) 0:00:11.949 *********
2025-08-29 19:39:35.018334 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-08-29 19:39:35.018341 | orchestrator |
2025-08-29 19:39:35.018349 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-08-29 19:39:35.018357 | orchestrator | Friday 29 August 2025 19:37:56 +0000 (0:00:03.218) 0:00:15.168 *********
2025-08-29 19:39:35.018364 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 19:39:35.018372 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-08-29 19:39:35.018380 | orchestrator |
2025-08-29 19:39:35.018388 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-08-29 19:39:35.018396 | orchestrator | Friday 29 August 2025 19:38:00 +0000 (0:00:04.173) 0:00:19.341 *********
2025-08-29 19:39:35.018404 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 19:39:35.018412 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-08-29 19:39:35.018419 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-08-29 19:39:35.018427 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-08-29 19:39:35.018435 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-08-29 19:39:35.018443 | orchestrator |
2025-08-29 19:39:35.018451 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-08-29 19:39:35.018458 | orchestrator | Friday 29 August 2025 19:38:16 +0000
(0:00:15.797) 0:00:35.140 *********
2025-08-29 19:39:35.018466 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-08-29 19:39:35.018474 | orchestrator |
2025-08-29 19:39:35.018482 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-08-29 19:39:35.018491 | orchestrator | Friday 29 August 2025 19:38:20 +0000 (0:00:03.872) 0:00:39.013 *********
2025-08-29 19:39:35.018511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.018552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.018594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.018608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018667 | orchestrator |
2025-08-29 19:39:35.018675 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-08-29 19:39:35.018687 | orchestrator | Friday 29 August 2025 19:38:22 +0000 (0:00:02.513) 0:00:41.526 *********
2025-08-29 19:39:35.018699 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-08-29 19:39:35.018707 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-08-29 19:39:35.018715 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-08-29 19:39:35.018723 | orchestrator |
2025-08-29 19:39:35.018736 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-08-29 19:39:35.018756 | orchestrator | Friday 29 August 2025 19:38:24 +0000 (0:00:01.657) 0:00:43.183 *********
2025-08-29 19:39:35.018771 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:39:35.018785 | orchestrator |
2025-08-29 19:39:35.018799 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-08-29 19:39:35.018812 | orchestrator | Friday 29 August 2025 19:38:24 +0000 (0:00:00.279) 0:00:43.463 *********
2025-08-29 19:39:35.018822 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:39:35.018830 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:39:35.018838 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:39:35.018846 | orchestrator |
2025-08-29 19:39:35.018854 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 19:39:35.018862 | orchestrator | Friday 29 August 2025 19:38:25 +0000 (0:00:01.149) 0:00:44.612 *********
2025-08-29 19:39:35.018870 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:39:35.018878 | orchestrator |
2025-08-29 19:39:35.018886 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-08-29 19:39:35.018894 | orchestrator | Friday 29 August 2025 19:38:26 +0000 (0:00:00.782) 0:00:45.395 *********
2025-08-29 19:39:35.018903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.018919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.018927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.018948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.018998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019006 | orchestrator |
2025-08-29 19:39:35.019014 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-08-29 19:39:35.019023 | orchestrator | Friday 29 August 2025 19:38:29 +0000 (0:00:03.025) 0:00:48.420 *********
2025-08-29 19:39:35.019044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019077 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:39:35.019085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019111 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:39:35.019128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019159 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:39:35.019167 | orchestrator |
2025-08-29 19:39:35.019175 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-08-29 19:39:35.019184 | orchestrator | Friday 29 August 2025 19:38:32 +0000 (0:00:02.365) 0:00:50.786 *********
2025-08-29 19:39:35.019192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api',
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019227 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:39:35.019236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019266 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:39:35.019274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.019312 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:39:35.019320 | orchestrator |
2025-08-29 19:39:35.019329 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-08-29 19:39:35.019337 | orchestrator | Friday 29 August 2025 19:38:34 +0000 (0:00:02.320) 0:00:53.107 *********
2025-08-29 19:39:35.019345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-08-29 19:39:35.019362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311',
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.019380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019435 | orchestrator | 2025-08-29 
19:39:35.019444 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-08-29 19:39:35.019452 | orchestrator | Friday 29 August 2025 19:38:38 +0000 (0:00:04.495) 0:00:57.602 *********
2025-08-29 19:39:35.019460 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.019469 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:39:35.019477 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:39:35.019484 | orchestrator |
2025-08-29 19:39:35.019496 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-08-29 19:39:35.019568 | orchestrator | Friday 29 August 2025 19:38:41 +0000 (0:00:02.832) 0:01:00.435 *********
2025-08-29 19:39:35.019590 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 19:39:35.019604 | orchestrator |
2025-08-29 19:39:35.019618 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-08-29 19:39:35.019627 | orchestrator | Friday 29 August 2025 19:38:43 +0000 (0:00:02.103) 0:01:02.538 *********
2025-08-29 19:39:35.019635 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:39:35.019643 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:39:35.019659 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:39:35.019667 | orchestrator |
2025-08-29 19:39:35.019682 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-08-29 19:39:35.019695 | orchestrator | Friday 29 August 2025 19:38:44 +0000 (0:00:00.655) 0:01:03.194 *********
2025-08-29 19:39:35.019704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.019713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.019721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.019734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.019839 | orchestrator | 2025-08-29 19:39:35.019846 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 19:39:35.019854 | orchestrator | Friday 29 August 2025 19:38:54 +0000 (0:00:10.012) 0:01:13.206 ********* 2025-08-29 19:39:35.019862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 19:39:35.019888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:39:35.019897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:39:35.019905 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:39:35.019913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-08-29 19:39:35.019922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:39:35.019930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:39:35.019938 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:39:35.019946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 19:39:35.019967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 19:39:35.019977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:39:35.019985 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:39:35.019993 | orchestrator | 2025-08-29 19:39:35.020001 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 19:39:35.020009 | orchestrator | Friday 29 August 2025 19:38:55 +0000 
(0:00:01.056) 0:01:14.263 ********* 2025-08-29 19:39:35.020017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.020026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.020039 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 19:39:35.020056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.020065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.020073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.020081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:39:35.020089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.020100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:39:35.020107 | orchestrator |
2025-08-29 19:39:35.020114 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 19:39:35.020121 | orchestrator | Friday 29 August 2025 19:38:59 +0000 (0:00:04.025) 0:01:18.289 *********
2025-08-29 19:39:35.020128 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:39:35.020134 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:39:35.020141 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:39:35.020148 | orchestrator |
2025-08-29 19:39:35.020154 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-08-29 19:39:35.020165 | orchestrator | Friday 29 August 2025 19:38:59 +0000 (0:00:00.231) 0:01:18.521 *********
2025-08-29 19:39:35.020172 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.020178 | orchestrator |
2025-08-29 19:39:35.020185 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-08-29 19:39:35.020194 | orchestrator | Friday 29 August 2025 19:39:01 +0000 (0:00:02.140) 0:01:20.662 *********
2025-08-29 19:39:35.020201 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.020208 | orchestrator |
2025-08-29 19:39:35.020214 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-08-29 19:39:35.020221 | orchestrator | Friday 29 August 2025 19:39:03 +0000 (0:00:01.708) 0:01:22.370 *********
2025-08-29 19:39:35.020228 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.020234 | orchestrator |
2025-08-29 19:39:35.020241 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-08-29 19:39:35.020247 | orchestrator | Friday 29 August 2025 19:39:14 +0000 (0:00:10.610) 0:01:32.981 *********
2025-08-29 19:39:35.020254 | orchestrator |
2025-08-29 19:39:35.020261 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-08-29 19:39:35.020267 | orchestrator | Friday 29 August 2025 19:39:14 +0000 (0:00:00.062) 0:01:33.043 *********
2025-08-29 19:39:35.020274 | orchestrator |
2025-08-29 19:39:35.020281 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-08-29 19:39:35.020287 | orchestrator | Friday 29 August 2025 19:39:14 +0000 (0:00:00.156) 0:01:33.199 *********
2025-08-29 19:39:35.020294 | orchestrator |
2025-08-29 19:39:35.020301 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-08-29 19:39:35.020307 | orchestrator | Friday 29 August 2025 19:39:14 +0000 (0:00:00.173) 0:01:33.373 *********
2025-08-29 19:39:35.020314 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.020320 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:39:35.020327 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:39:35.020334 | orchestrator |
2025-08-29 19:39:35.020340 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-08-29 19:39:35.020347 | orchestrator | Friday 29 August 2025 19:39:22 +0000 (0:00:07.546) 0:01:40.920 *********
2025-08-29 19:39:35.020353 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.020360 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:39:35.020367 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:39:35.020373 | orchestrator |
2025-08-29 19:39:35.020380 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-08-29 19:39:35.020387 | orchestrator | Friday 29 August 2025 19:39:28 +0000 (0:00:05.947) 0:01:46.867 *********
2025-08-29 19:39:35.020398 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:39:35.020405 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:39:35.020411 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:39:35.020418 | orchestrator |
2025-08-29 19:39:35.020425 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:39:35.020432 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 19:39:35.020439 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:39:35.020445 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:39:35.020452 | orchestrator |
2025-08-29 19:39:35.020459 | orchestrator |
2025-08-29 19:39:35.020465 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:39:35.020472 | orchestrator | Friday 29 August 2025 19:39:33 +0000 (0:00:05.313) 0:01:52.181 *********
2025-08-29 19:39:35.020479 | orchestrator | ===============================================================================
2025-08-29 19:39:35.020485 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.80s
2025-08-29 19:39:35.020495
| orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.61s 2025-08-29 19:39:35.020512 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.01s 2025-08-29 19:39:35.020542 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.55s 2025-08-29 19:39:35.020553 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.04s 2025-08-29 19:39:35.020565 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.95s 2025-08-29 19:39:35.020577 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.31s 2025-08-29 19:39:35.020588 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.50s 2025-08-29 19:39:35.020600 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.17s 2025-08-29 19:39:35.020607 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.03s 2025-08-29 19:39:35.020614 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.87s 2025-08-29 19:39:35.020620 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.56s 2025-08-29 19:39:35.020627 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.22s 2025-08-29 19:39:35.020634 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.03s 2025-08-29 19:39:35.020640 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.83s 2025-08-29 19:39:35.020647 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.51s 2025-08-29 19:39:35.020653 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.37s 2025-08-29 19:39:35.020660 | 
orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.32s 2025-08-29 19:39:35.020673 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.14s 2025-08-29 19:39:35.020680 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.10s 2025-08-29 19:39:35.020691 | orchestrator | 2025-08-29 19:39:35 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:35.020699 | orchestrator | 2025-08-29 19:39:35 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:35.020706 | orchestrator | 2025-08-29 19:39:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:38.063621 | orchestrator | 2025-08-29 19:39:38 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:38.064647 | orchestrator | 2025-08-29 19:39:38 | INFO  | Task ada1af86-0924-4fb3-86d3-fd7db71e3fa2 is in state STARTED 2025-08-29 19:39:38.066976 | orchestrator | 2025-08-29 19:39:38 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:38.069587 | orchestrator | 2025-08-29 19:39:38 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:38.069642 | orchestrator | 2025-08-29 19:39:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:39:41.106215 | orchestrator | 2025-08-29 19:39:41 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:39:41.106739 | orchestrator | 2025-08-29 19:39:41 | INFO  | Task ada1af86-0924-4fb3-86d3-fd7db71e3fa2 is in state STARTED 2025-08-29 19:39:41.107320 | orchestrator | 2025-08-29 19:39:41 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:39:41.108408 | orchestrator | 2025-08-29 19:39:41 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state STARTED 2025-08-29 19:39:41.108437 | orchestrator | 2025-08-29 19:39:41 | INFO  
| Wait 1 second(s) until the next check [2025-08-29 19:39:44 – 19:40:45 | orchestrator | repeated polling rounds elided: tasks d1e65660-2066-4589-a329-a4a88b955499, 53df4cc6-dce8-4ee1-9b73-a3c62db6298c and 2d4d2a53-5836-498d-9be6-8ce5fcda9816 remain in state STARTED throughout; at 19:40:23 task ada1af86-0924-4fb3-86d3-fd7db71e3fa2 reaches state SUCCESS and at 19:40:26 task 7f1a7a42-06da-49fb-8494-924c19290e9d appears in state STARTED] 2025-08-29 19:40:48.199305 | orchestrator | 2025-08-29 19:40:48 | INFO  | Task 
f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:40:48.200315 | orchestrator | 2025-08-29 19:40:48 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:40:48.202791 | orchestrator | 2025-08-29 19:40:48 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:40:48.204501 | orchestrator | 2025-08-29 19:40:48 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:40:48.208313 | orchestrator | 2025-08-29 19:40:48 | INFO  | Task 2d4d2a53-5836-498d-9be6-8ce5fcda9816 is in state SUCCESS 2025-08-29 19:40:48.209389 | orchestrator | 2025-08-29 19:40:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:40:48.210863 | orchestrator | 2025-08-29 19:40:48.210904 | orchestrator | 2025-08-29 19:40:48.210916 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-08-29 19:40:48.210927 | orchestrator | 2025-08-29 19:40:48.210939 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-08-29 19:40:48.210950 | orchestrator | Friday 29 August 2025 19:39:39 +0000 (0:00:00.204) 0:00:00.204 ********* 2025-08-29 19:40:48.210961 | orchestrator | changed: [localhost] 2025-08-29 19:40:48.210973 | orchestrator | 2025-08-29 19:40:48.210984 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-08-29 19:40:48.210995 | orchestrator | Friday 29 August 2025 19:39:41 +0000 (0:00:01.501) 0:00:01.705 ********* 2025-08-29 19:40:48.211006 | orchestrator | changed: [localhost] 2025-08-29 19:40:48.211017 | orchestrator | 2025-08-29 19:40:48.211028 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-08-29 19:40:48.211038 | orchestrator | Friday 29 August 2025 19:40:17 +0000 (0:00:35.827) 0:00:37.535 ********* 2025-08-29 19:40:48.211089 | orchestrator | changed: [localhost] 2025-08-29 
19:40:48.211110 | orchestrator | 2025-08-29 19:40:48.211206 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:40:48.211227 | orchestrator | 2025-08-29 19:40:48.211247 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:40:48.211267 | orchestrator | Friday 29 August 2025 19:40:22 +0000 (0:00:05.003) 0:00:42.539 ********* 2025-08-29 19:40:48.211287 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:40:48.211711 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:40:48.211728 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:40:48.211740 | orchestrator | 2025-08-29 19:40:48.211753 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:40:48.211764 | orchestrator | Friday 29 August 2025 19:40:22 +0000 (0:00:00.503) 0:00:43.042 ********* 2025-08-29 19:40:48.211775 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-08-29 19:40:48.211786 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-08-29 19:40:48.211797 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-08-29 19:40:48.211808 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-08-29 19:40:48.211819 | orchestrator | 2025-08-29 19:40:48.211829 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-08-29 19:40:48.211840 | orchestrator | skipping: no hosts matched 2025-08-29 19:40:48.211852 | orchestrator | 2025-08-29 19:40:48.211896 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:40:48.211908 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:40:48.211922 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-08-29 19:40:48.211935 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:40:48.211986 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:40:48.211999 | orchestrator | 2025-08-29 19:40:48.212010 | orchestrator | 2025-08-29 19:40:48.212021 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:40:48.212032 | orchestrator | Friday 29 August 2025 19:40:23 +0000 (0:00:00.510) 0:00:43.553 ********* 2025-08-29 19:40:48.212044 | orchestrator | =============================================================================== 2025-08-29 19:40:48.212054 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 35.83s 2025-08-29 19:40:48.212065 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.00s 2025-08-29 19:40:48.212076 | orchestrator | Ensure the destination directory exists --------------------------------- 1.50s 2025-08-29 19:40:48.212087 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-08-29 19:40:48.212098 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-08-29 19:40:48.212109 | orchestrator | 2025-08-29 19:40:48.212120 | orchestrator | 2025-08-29 19:40:48.212130 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:40:48.212141 | orchestrator | 2025-08-29 19:40:48.212193 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:40:48.212205 | orchestrator | Friday 29 August 2025 19:37:43 +0000 (0:00:00.246) 0:00:00.247 ********* 2025-08-29 19:40:48.212319 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:40:48.212339 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:40:48.212356 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 19:40:48.212374 | orchestrator | 2025-08-29 19:40:48.212393 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:40:48.212412 | orchestrator | Friday 29 August 2025 19:37:43 +0000 (0:00:00.268) 0:00:00.515 ********* 2025-08-29 19:40:48.212432 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 19:40:48.212480 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-08-29 19:40:48.212497 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-08-29 19:40:48.212508 | orchestrator | 2025-08-29 19:40:48.212519 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 19:40:48.212530 | orchestrator | 2025-08-29 19:40:48.212541 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 19:40:48.212552 | orchestrator | Friday 29 August 2025 19:37:44 +0000 (0:00:00.378) 0:00:00.893 ********* 2025-08-29 19:40:48.212563 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:40:48.212575 | orchestrator | 2025-08-29 19:40:48.212585 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 19:40:48.212596 | orchestrator | Friday 29 August 2025 19:37:44 +0000 (0:00:00.459) 0:00:01.353 ********* 2025-08-29 19:40:48.212623 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 19:40:48.212636 | orchestrator | 2025-08-29 19:40:48.212647 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 19:40:48.212658 | orchestrator | Friday 29 August 2025 19:37:48 +0000 (0:00:03.779) 0:00:05.132 ********* 2025-08-29 19:40:48.212669 | orchestrator | changed: [testbed-node-0] => (item=designate -> 
https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 19:40:48.212680 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 19:40:48.212691 | orchestrator | 2025-08-29 19:40:48.212702 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 19:40:48.212713 | orchestrator | Friday 29 August 2025 19:37:54 +0000 (0:00:05.885) 0:00:11.018 ********* 2025-08-29 19:40:48.212723 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 19:40:48.212734 | orchestrator | 2025-08-29 19:40:48.212745 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 19:40:48.212767 | orchestrator | Friday 29 August 2025 19:37:57 +0000 (0:00:03.085) 0:00:14.103 ********* 2025-08-29 19:40:48.212778 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 19:40:48.212789 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 19:40:48.212800 | orchestrator | 2025-08-29 19:40:48.212819 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 19:40:48.212837 | orchestrator | Friday 29 August 2025 19:38:01 +0000 (0:00:03.863) 0:00:17.966 ********* 2025-08-29 19:40:48.212854 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 19:40:48.212879 | orchestrator | 2025-08-29 19:40:48.212901 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 19:40:48.212919 | orchestrator | Friday 29 August 2025 19:38:04 +0000 (0:00:03.642) 0:00:21.608 ********* 2025-08-29 19:40:48.212938 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 19:40:48.212956 | orchestrator | 2025-08-29 19:40:48.212975 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 
19:40:48.212993 | orchestrator | Friday 29 August 2025 19:38:09 +0000 (0:00:04.471) 0:00:26.080 ********* 2025-08-29 19:40:48.213030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.213060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.213118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.213165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.213359 | orchestrator |
2025-08-29 19:40:48.213370 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-08-29 19:40:48.213381 | orchestrator | Friday 29 August 2025 19:38:13 +0000 (0:00:03.980) 0:00:30.060 *********
2025-08-29 19:40:48.213393 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:40:48.213404 | orchestrator |
2025-08-29 19:40:48.213415 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-08-29 19:40:48.213425 | orchestrator | Friday 29 August 2025 19:38:13 +0000 (0:00:00.114) 0:00:30.175 *********
2025-08-29 19:40:48.213436 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:40:48.213474 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:40:48.213486 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:40:48.213497 | orchestrator |
2025-08-29 19:40:48.213508 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-08-29 19:40:48.213519 | orchestrator | Friday 29 August 2025 19:38:13 +0000 (0:00:00.301) 0:00:30.476 *********
2025-08-29 19:40:48.213531 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:40:48.213542 | orchestrator |
2025-08-29 19:40:48.213553 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-08-29 19:40:48.213563 | orchestrator | Friday 29 August 2025 19:38:14 +0000 (0:00:00.592) 0:00:31.068 *********
2025-08-29 19:40:48.213580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.213594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.213620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.213632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2025-08-29 19:40:48.213701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.213838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.213849 | orchestrator |
2025-08-29 19:40:48.213860 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-08-29 19:40:48.213871 | orchestrator | Friday 29 August 2025 19:38:20 +0000 (0:00:06.047) 0:00:37.115 *********
2025-08-29 19:40:48.213883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.213895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.213913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.213925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.214738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.214782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.214930 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:40:48.214966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 19:40:48.214984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.215010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215089 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:40:48.215108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 19:40:48.215133 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.215165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215235 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:40:48.215247 | orchestrator | 2025-08-29 19:40:48.215258 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 19:40:48.215270 | orchestrator | Friday 29 August 2025 19:38:22 +0000 (0:00:02.484) 0:00:39.599 ********* 2025-08-29 19:40:48.215282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 19:40:48.215299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.215322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215378 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:40:48.215391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 19:40:48.215409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.215430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215521 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215591 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:40:48.215613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 19:40:48.215642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.215671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.215713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-08-29 19:40:48.215725 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:40:48.215736 | orchestrator | 2025-08-29 19:40:48.215748 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 19:40:48.215759 | orchestrator | Friday 29 August 2025 19:38:25 +0000 (0:00:02.212) 0:00:41.812 ********* 2025-08-29 19:40:48.215770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.215794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.215806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.215841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.215993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216034 | orchestrator | 2025-08-29 19:40:48.216048 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 19:40:48.216059 | orchestrator | Friday 29 August 2025 19:38:31 +0000 (0:00:06.535) 0:00:48.347 ********* 2025-08-29 19:40:48.216069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.216085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.216100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.216110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.216152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216295 | orchestrator |
2025-08-29 19:40:48.216305 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-08-29 19:40:48.216315 | orchestrator | Friday 29 August 2025 19:38:55 +0000 (0:00:23.485) 0:01:11.833 *********
2025-08-29 19:40:48.216325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-08-29 19:40:48.216335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-08-29 19:40:48.216345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-08-29 19:40:48.216355 | orchestrator |
2025-08-29 19:40:48.216370 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-08-29 19:40:48.216385 | orchestrator | Friday 29 August 2025 19:39:00 +0000 (0:00:05.666) 0:01:17.500 *********
2025-08-29 19:40:48.216395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-08-29 19:40:48.216405 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-08-29 19:40:48.216414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-08-29 19:40:48.216424 | orchestrator |
2025-08-29 19:40:48.216433 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-08-29 19:40:48.216468 | orchestrator | Friday 29 August 2025 19:39:04 +0000 (0:00:03.845) 0:01:21.346 *********
2025-08-29 19:40:48.216486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.216512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.216532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.216550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.216595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.216684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.216758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.216887 | orchestrator |
2025-08-29 19:40:48.216911 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-08-29 19:40:48.216928 | orchestrator | Friday 29 August 2025 19:39:08 +0000 (0:00:03.485) 0:01:24.831 *********
2025-08-29 19:40:48.216946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.216964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.216988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.217002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.217021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.217074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.217190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217264 | orchestrator |
2025-08-29 19:40:48.217274 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-08-29 19:40:48.217284 | orchestrator | Friday 29 August 2025 19:39:11 +0000 (0:00:02.970) 0:01:27.801 *********
2025-08-29 19:40:48.217294 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:40:48.217304 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:40:48.217313 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:40:48.217323 | orchestrator |
2025-08-29 19:40:48.217332 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-08-29 19:40:48.217342 | orchestrator | Friday 29 August 2025 19:39:11 +0000 (0:00:00.442) 0:01:28.244 *********
2025-08-29 19:40:48.217358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.217369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 19:40:48.217379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 19:40:48.217433 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:40:48.217478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 19:40:48.217489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9',
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.217500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217552 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:40:48.217567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 19:40:48.217578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 19:40:48.217588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-08-29 19:40:48.217619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 19:40:48.217639 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:40:48.217649 | orchestrator | 2025-08-29 19:40:48.217659 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 19:40:48.217669 | orchestrator | Friday 29 August 2025 19:39:13 +0000 (0:00:01.932) 0:01:30.177 ********* 2025-08-29 19:40:48.217684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.217695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.217705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 19:40:48.217727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 19:40:48.217917 | orchestrator | 2025-08-29 19:40:48.217927 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2025-08-29 19:40:48.217936 | orchestrator | Friday 29 August 2025 19:39:18 +0000 (0:00:04.995) 0:01:35.172 ********* 2025-08-29 19:40:48.217947 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:40:48.217965 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:40:48.217980 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:40:48.217996 | orchestrator | 2025-08-29 19:40:48.218012 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-08-29 19:40:48.218092 | orchestrator | Friday 29 August 2025 19:39:18 +0000 (0:00:00.454) 0:01:35.627 ********* 2025-08-29 19:40:48.218107 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 19:40:48.218117 | orchestrator | 2025-08-29 19:40:48.218127 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-08-29 19:40:48.218136 | orchestrator | Friday 29 August 2025 19:39:20 +0000 (0:00:01.953) 0:01:37.580 ********* 2025-08-29 19:40:48.218146 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 19:40:48.218156 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 19:40:48.218165 | orchestrator | 2025-08-29 19:40:48.218175 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 19:40:48.218185 | orchestrator | Friday 29 August 2025 19:39:23 +0000 (0:00:02.370) 0:01:39.950 ********* 2025-08-29 19:40:48.218194 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218204 | orchestrator | 2025-08-29 19:40:48.218214 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 19:40:48.218231 | orchestrator | Friday 29 August 2025 19:39:37 +0000 (0:00:14.616) 0:01:54.567 ********* 2025-08-29 19:40:48.218241 | orchestrator | 2025-08-29 19:40:48.218251 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2025-08-29 19:40:48.218260 | orchestrator | Friday 29 August 2025 19:39:38 +0000 (0:00:00.358) 0:01:54.926 ********* 2025-08-29 19:40:48.218270 | orchestrator | 2025-08-29 19:40:48.218279 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 19:40:48.218288 | orchestrator | Friday 29 August 2025 19:39:38 +0000 (0:00:00.150) 0:01:55.076 ********* 2025-08-29 19:40:48.218298 | orchestrator | 2025-08-29 19:40:48.218307 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-08-29 19:40:48.218326 | orchestrator | Friday 29 August 2025 19:39:38 +0000 (0:00:00.174) 0:01:55.251 ********* 2025-08-29 19:40:48.218335 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:40:48.218345 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:40:48.218354 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218364 | orchestrator | 2025-08-29 19:40:48.218374 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 19:40:48.218383 | orchestrator | Friday 29 August 2025 19:39:49 +0000 (0:00:10.993) 0:02:06.245 ********* 2025-08-29 19:40:48.218393 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218402 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:40:48.218411 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:40:48.218421 | orchestrator | 2025-08-29 19:40:48.218430 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 19:40:48.218459 | orchestrator | Friday 29 August 2025 19:39:56 +0000 (0:00:07.385) 0:02:13.630 ********* 2025-08-29 19:40:48.218470 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218480 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:40:48.218490 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:40:48.218499 | orchestrator | 
2025-08-29 19:40:48.218509 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 19:40:48.218519 | orchestrator | Friday 29 August 2025 19:40:09 +0000 (0:00:12.164) 0:02:25.795 ********* 2025-08-29 19:40:48.218530 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:40:48.218546 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:40:48.218563 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218577 | orchestrator | 2025-08-29 19:40:48.218595 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-08-29 19:40:48.218611 | orchestrator | Friday 29 August 2025 19:40:20 +0000 (0:00:11.414) 0:02:37.210 ********* 2025-08-29 19:40:48.218627 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218639 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:40:48.218648 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:40:48.218658 | orchestrator | 2025-08-29 19:40:48.218668 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-08-29 19:40:48.218677 | orchestrator | Friday 29 August 2025 19:40:31 +0000 (0:00:10.935) 0:02:48.146 ********* 2025-08-29 19:40:48.218687 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218696 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:40:48.218706 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:40:48.218716 | orchestrator | 2025-08-29 19:40:48.218731 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-08-29 19:40:48.218754 | orchestrator | Friday 29 August 2025 19:40:38 +0000 (0:00:06.584) 0:02:54.731 ********* 2025-08-29 19:40:48.218770 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:40:48.218784 | orchestrator | 2025-08-29 19:40:48.218798 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:40:48.218813 | 
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 19:40:48.218830 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:40:48.218845 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:40:48.218861 | orchestrator | 2025-08-29 19:40:48.218875 | orchestrator | 2025-08-29 19:40:48.218891 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:40:48.218905 | orchestrator | Friday 29 August 2025 19:40:44 +0000 (0:00:06.863) 0:03:01.594 ********* 2025-08-29 19:40:48.218920 | orchestrator | =============================================================================== 2025-08-29 19:40:48.218935 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.49s 2025-08-29 19:40:48.218964 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.62s 2025-08-29 19:40:48.218980 | orchestrator | designate : Restart designate-central container ------------------------ 12.16s 2025-08-29 19:40:48.218995 | orchestrator | designate : Restart designate-producer container ----------------------- 11.41s 2025-08-29 19:40:48.219005 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.99s 2025-08-29 19:40:48.219014 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.94s 2025-08-29 19:40:48.219024 | orchestrator | designate : Restart designate-api container ----------------------------- 7.39s 2025-08-29 19:40:48.219033 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.86s 2025-08-29 19:40:48.219043 | orchestrator | designate : Restart designate-worker container -------------------------- 6.58s 2025-08-29 19:40:48.219053 | orchestrator | designate : 
Copying over config.json files for services ----------------- 6.54s 2025-08-29 19:40:48.219062 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.05s 2025-08-29 19:40:48.219072 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.89s 2025-08-29 19:40:48.219089 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.67s 2025-08-29 19:40:48.219114 | orchestrator | designate : Check designate containers ---------------------------------- 5.00s 2025-08-29 19:40:48.219130 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.47s 2025-08-29 19:40:48.219146 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.98s 2025-08-29 19:40:48.219161 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.86s 2025-08-29 19:40:48.219176 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.85s 2025-08-29 19:40:48.219190 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.78s 2025-08-29 19:40:48.219204 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.64s 2025-08-29 19:40:51.249761 | orchestrator | 2025-08-29 19:40:51 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:40:51.251481 | orchestrator | 2025-08-29 19:40:51 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:40:51.253177 | orchestrator | 2025-08-29 19:40:51 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:40:51.255161 | orchestrator | 2025-08-29 19:40:51 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:40:51.255219 | orchestrator | 2025-08-29 19:40:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:40:54.298170 | 
orchestrator | 2025-08-29 19:40:54 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:40:54.299902 | orchestrator | 2025-08-29 19:40:54 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:40:54.301373 | orchestrator | 2025-08-29 19:40:54 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:40:54.303099 | orchestrator | 2025-08-29 19:40:54 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:40:54.304027 | orchestrator | 2025-08-29 19:40:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:40:57.352746 | orchestrator | 2025-08-29 19:40:57 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:40:57.355145 | orchestrator | 2025-08-29 19:40:57 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:40:57.357192 | orchestrator | 2025-08-29 19:40:57 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:40:57.363544 | orchestrator | 2025-08-29 19:40:57 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:40:57.363637 | orchestrator | 2025-08-29 19:40:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:41:00.401681 | orchestrator | 2025-08-29 19:41:00 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:41:00.401815 | orchestrator | 2025-08-29 19:41:00 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state STARTED 2025-08-29 19:41:00.401861 | orchestrator | 2025-08-29 19:41:00 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:41:00.402699 | orchestrator | 2025-08-29 19:41:00 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:41:00.402791 | orchestrator | 2025-08-29 19:41:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:41:03.428067 | orchestrator | 2025-08-29 
19:41:03 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:41:03.430253 | orchestrator | 2025-08-29 19:41:03 | INFO  | Task d1e65660-2066-4589-a329-a4a88b955499 is in state SUCCESS 2025-08-29 19:41:03.431259 | orchestrator | 2025-08-29 19:41:03.433250 | orchestrator | 2025-08-29 19:41:03.433292 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:41:03.433305 | orchestrator | 2025-08-29 19:41:03.433315 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:41:03.433326 | orchestrator | Friday 29 August 2025 19:37:35 +0000 (0:00:00.343) 0:00:00.343 ********* 2025-08-29 19:41:03.433336 | orchestrator | ok: [testbed-manager] 2025-08-29 19:41:03.433347 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:41:03.433357 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:41:03.433366 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:41:03.433376 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:41:03.433385 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:41:03.433395 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:41:03.433409 | orchestrator | 2025-08-29 19:41:03.433452 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:41:03.433479 | orchestrator | Friday 29 August 2025 19:37:37 +0000 (0:00:01.374) 0:00:01.718 ********* 2025-08-29 19:41:03.433496 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433512 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433528 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433542 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433556 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433572 | orchestrator | ok: 
[testbed-node-4] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433588 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-08-29 19:41:03.433603 | orchestrator | 2025-08-29 19:41:03.433620 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-08-29 19:41:03.433637 | orchestrator | 2025-08-29 19:41:03.433653 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 19:41:03.433670 | orchestrator | Friday 29 August 2025 19:37:38 +0000 (0:00:01.012) 0:00:02.731 ********* 2025-08-29 19:41:03.433688 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:41:03.433707 | orchestrator | 2025-08-29 19:41:03.433723 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-08-29 19:41:03.433739 | orchestrator | Friday 29 August 2025 19:37:40 +0000 (0:00:02.014) 0:00:04.745 ********* 2025-08-29 19:41:03.433753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 19:41:03.433797 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.433821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.433832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.433970 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.433989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.434092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.434124 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434257 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 19:41:03.434280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.434398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434409 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.434655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.434666 | orchestrator | 2025-08-29 19:41:03.434676 | orchestrator | TASK [prometheus : include_tasks] 
********************************************** 2025-08-29 19:41:03.434687 | orchestrator | Friday 29 August 2025 19:37:43 +0000 (0:00:03.527) 0:00:08.273 ********* 2025-08-29 19:41:03.434706 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:41:03.434719 | orchestrator | 2025-08-29 19:41:03.434736 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 19:41:03.434758 | orchestrator | Friday 29 August 2025 19:37:45 +0000 (0:00:01.523) 0:00:09.796 ********* 2025-08-29 19:41:03.434779 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 19:41:03.434796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.434821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.434839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.434982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.435003 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.435013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.435033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.435043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.435054 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.435071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.435082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.435161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435215 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 19:41:03.435232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435516 | orchestrator |
2025-08-29 19:41:03.435533 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-08-29 19:41:03.435551 | orchestrator | Friday 29 August 2025 19:37:50 +0000 (0:00:05.846) 0:00:15.643 *********
2025-08-29 19:41:03.435568 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 19:41:03.435585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.435600 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435617 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 19:41:03.435667 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.435697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.435753 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:41:03.435764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.435857 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:41:03.435869 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:41:03.435881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.435967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.435980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.435990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436000 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:41:03.436010 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:41:03.436020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436051 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:41:03.436067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436120 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436132 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:41:03.436142 | orchestrator |
2025-08-29 19:41:03.436153 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-08-29 19:41:03.436163 | orchestrator | Friday 29 August 2025 19:37:52 +0000 (0:00:01.625) 0:00:17.268 *********
2025-08-29 19:41:03.436173 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 19:41:03.436184 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-08-29 19:41:03.436258 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436276 | orchestrator | skipping: [testbed-manager]
2025-08-29 19:41:03.436345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436475 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:41:03.436492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436571 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:41:03.436581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.436647 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:41:03.436683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436716 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:41:03.436725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 19:41:03.436736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes':
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 19:41:03.436752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 19:41:03.436762 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.436777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 19:41:03.436787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 19:41:03.436853 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:41:03.436868 | orchestrator |
2025-08-29 19:41:03.436885 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-08-29 19:41:03.436901 | orchestrator | Friday 29 August 2025 19:37:54 +0000 (0:00:01.974) 0:00:19.243 *********
2025-08-29 19:41:03.436917 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 19:41:03.436934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.436950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.436978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.437003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.437019 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.437083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.437104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.437122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.437141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.437168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.437198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437208 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.437284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.437345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.437363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437389 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 19:41:03.437491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.437572 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.437588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.437611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.437628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 19:41:03.437644 | orchestrator |
2025-08-29 19:41:03.437659 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-08-29 19:41:03.437675 | orchestrator | Friday 29 August 2025 19:38:00 +0000 (0:00:06.109) 0:00:25.353 *********
2025-08-29 19:41:03.437691 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 19:41:03.437710 | orchestrator |
2025-08-29 19:41:03.437724 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-08-29 19:41:03.437785 | orchestrator | Friday 29 August 2025 19:38:01 +0000 (0:00:01.020) 0:00:26.373 *********
2025-08-29 19:41:03.437803 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327284, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:41:03.437822 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327284, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 19:41:03.437850 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327284, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime':
1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.437867 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327284, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.437884 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327301, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7576098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.437908 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327284, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.437970 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327301, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7576098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.437991 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1327284, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.438064 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1327301, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7576098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.438087 | orchestrator | skipping: 
[testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-08-29 19:41:03 | orchestrator | Loop items (all regular files under /operations/prometheus/, mode 0644, owner root:root, mtime 1756453149):
2025-08-29 19:41:03 | orchestrator |   fluentd-aggregator.rules (996 B), ceph.rules (55956 B), prometheus.rules (12980 B), openstack.rules (12293 B),
2025-08-29 19:41:03 | orchestrator |   cadvisor.rules (3900 B), haproxy.rules (7933 B), node.rules (13522 B), hardware.rules (5593 B),
2025-08-29 19:41:03 | orchestrator |   elasticsearch.rules (5987 B), prometheus-extra.rules (7408 B), redfish.rules (334 B),
2025-08-29 19:41:03 | orchestrator |   prometheus.rec.rules (3 B), alertmanager.rec.rules (3 B), ceph.rec.rules (3 B)
2025-08-29 19:41:03 | orchestrator | skipping: [testbed-node-0] through [testbed-node-5] for each item above
2025-08-29 19:41:03 | orchestrator | changed: [testbed-manager] => prometheus.rules, ceph.rules, openstack.rules (remaining items continue below)
2025-08-29 19:41:03.439322 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327278, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.747414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439333 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327299, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7556098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439343 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327326, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7680485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439353 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 
'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439363 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439383 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327298, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7556098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439393 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327298, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7556098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439411 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327272, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7451394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439422 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439461 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327278, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.747414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439472 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439492 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439503 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327278, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.747414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327326, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7680485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439530 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327312, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439541 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.439551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 
105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439562 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439572 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439593 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1327276, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7469985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.439611 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439637 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327298, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7556098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439782 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327312, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 
19:41:03.439820 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.439831 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439841 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439874 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327278, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.747414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439905 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1327286, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7509851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.439916 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439926 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327312, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439936 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.439946 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439962 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327312, 'dev': 105, 'nlink': 1, 'atime': 
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439973 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.439987 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327312, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.439997 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.440007 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.440024 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 
'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.440035 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1327291, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.753259, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440045 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1327312, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 19:41:03.440061 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.440071 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1327287, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.751915, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440081 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1327282, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7496097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440095 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327299, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7556098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440105 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327272, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7451394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2025-08-29 19:41:03.440121 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1327326, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7680485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440131 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1327298, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7556098, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440141 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1327278, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.747414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440157 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1327274, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7456853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440167 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1327290, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7529457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440181 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1327288, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.752276, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440192 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 
1327312, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7655823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 19:41:03.440201 | orchestrator | 2025-08-29 19:41:03.440212 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-08-29 19:41:03.440222 | orchestrator | Friday 29 August 2025 19:38:29 +0000 (0:00:28.058) 0:00:54.432 ********* 2025-08-29 19:41:03.440233 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 19:41:03.440242 | orchestrator | 2025-08-29 19:41:03.440257 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-08-29 19:41:03.440267 | orchestrator | Friday 29 August 2025 19:38:31 +0000 (0:00:02.102) 0:00:56.535 ********* 2025-08-29 19:41:03.440277 | orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440288 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440297 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440317 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-08-29 19:41:03.440327 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 19:41:03.440337 | orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440346 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440356 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440371 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440382 | orchestrator | node-0/prometheus.yml.d' is 
not a directory 2025-08-29 19:41:03.440392 | orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440401 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440411 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440421 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440452 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-08-29 19:41:03.440462 | orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440472 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440481 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440500 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-08-29 19:41:03.440510 | orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440520 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440530 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440549 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-08-29 19:41:03.440558 | orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440578 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440598 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-08-29 19:41:03.440607 | 
orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.440617 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440627 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-08-29 19:41:03.440637 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 19:41:03.440646 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-08-29 19:41:03.440656 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:41:03.440666 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 19:41:03.440676 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 19:41:03.440686 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 19:41:03.440695 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 19:41:03.440705 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 19:41:03.440715 | orchestrator | 2025-08-29 19:41:03.440725 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-08-29 19:41:03.440734 | orchestrator | Friday 29 August 2025 19:38:36 +0000 (0:00:04.643) 0:01:01.178 ********* 2025-08-29 19:41:03.440744 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 19:41:03.440754 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.440763 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 19:41:03.440773 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.440787 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 19:41:03.440797 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.440807 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 19:41:03.440816 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.440826 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 19:41:03.440841 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.440851 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 19:41:03.440861 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.440870 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-08-29 19:41:03.440880 | orchestrator | 2025-08-29 19:41:03.440890 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-08-29 19:41:03.440900 | orchestrator | Friday 29 August 2025 19:39:01 +0000 (0:00:24.610) 0:01:25.789 ********* 2025-08-29 19:41:03.440910 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 19:41:03.440925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.440936 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 19:41:03.440946 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 19:41:03.440955 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.440965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.440974 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 19:41:03.440984 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.440994 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 19:41:03.441003 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441013 | orchestrator | skipping: 
[testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 19:41:03.441022 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441032 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-08-29 19:41:03.441042 | orchestrator | 2025-08-29 19:41:03.441051 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-08-29 19:41:03.441061 | orchestrator | Friday 29 August 2025 19:39:05 +0000 (0:00:04.487) 0:01:30.277 ********* 2025-08-29 19:41:03.441071 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 19:41:03.441081 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.441090 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 19:41:03.441101 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 19:41:03.441111 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 19:41:03.441120 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-08-29 19:41:03.441130 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.441139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.441149 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441190 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 19:41:03.441200 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
19:41:03.441210 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 19:41:03.441219 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441229 | orchestrator | 2025-08-29 19:41:03.441238 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-08-29 19:41:03.441254 | orchestrator | Friday 29 August 2025 19:39:07 +0000 (0:00:01.950) 0:01:32.228 ********* 2025-08-29 19:41:03.441264 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 19:41:03.441274 | orchestrator | 2025-08-29 19:41:03.441283 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-08-29 19:41:03.441293 | orchestrator | Friday 29 August 2025 19:39:08 +0000 (0:00:01.313) 0:01:33.541 ********* 2025-08-29 19:41:03.441303 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:41:03.441312 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.441321 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.441331 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.441340 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441350 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.441359 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441369 | orchestrator | 2025-08-29 19:41:03.441378 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-08-29 19:41:03.441388 | orchestrator | Friday 29 August 2025 19:39:09 +0000 (0:00:00.939) 0:01:34.481 ********* 2025-08-29 19:41:03.441397 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:41:03.441407 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441416 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.441482 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441494 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:03.441504 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:03.441514 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:03.441523 | orchestrator | 2025-08-29 19:41:03.441533 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-08-29 19:41:03.441542 | orchestrator | Friday 29 August 2025 19:39:11 +0000 (0:00:02.108) 0:01:36.590 ********* 2025-08-29 19:41:03.441552 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441561 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.441581 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441590 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:41:03.441600 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.441609 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441619 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.441634 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441645 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441654 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441664 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441673 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 19:41:03.441683 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.441692 | orchestrator | 2025-08-29 19:41:03.441700 | orchestrator | TASK [prometheus : 
Copying config file for blackbox exporter] ****************** 2025-08-29 19:41:03.441708 | orchestrator | Friday 29 August 2025 19:39:14 +0000 (0:00:02.797) 0:01:39.387 ********* 2025-08-29 19:41:03.441716 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 19:41:03.441723 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.441731 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 19:41:03.441739 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.441747 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 19:41:03.441755 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 19:41:03.441772 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.441780 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 19:41:03.441788 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441796 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 19:41:03.441804 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.441812 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 19:41:03.441820 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441828 | orchestrator | 2025-08-29 19:41:03.441836 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 19:41:03.441844 | orchestrator | Friday 29 August 2025 19:39:17 +0000 (0:00:02.456) 0:01:41.844 ********* 2025-08-29 19:41:03.441852 | 
orchestrator | [WARNING]: Skipped 2025-08-29 19:41:03.441859 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 19:41:03.441867 | orchestrator | due to this access issue: 2025-08-29 19:41:03.441875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 19:41:03.441883 | orchestrator | not a directory 2025-08-29 19:41:03.441891 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 19:41:03.441899 | orchestrator | 2025-08-29 19:41:03.441906 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 19:41:03.441914 | orchestrator | Friday 29 August 2025 19:39:18 +0000 (0:00:01.556) 0:01:43.400 ********* 2025-08-29 19:41:03.441922 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:41:03.441930 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.441937 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.441945 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.441953 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.441961 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:41:03.441969 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.441976 | orchestrator | 2025-08-29 19:41:03.441984 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 19:41:03.441992 | orchestrator | Friday 29 August 2025 19:39:19 +0000 (0:00:00.847) 0:01:44.247 ********* 2025-08-29 19:41:03.442000 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:41:03.442008 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:03.442065 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:03.442073 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:03.442082 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:41:03.442090 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
19:41:03.442098 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:41:03.442105 | orchestrator | 2025-08-29 19:41:03.442113 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 19:41:03.442122 | orchestrator | Friday 29 August 2025 19:39:20 +0000 (0:00:01.189) 0:01:45.437 ********* 2025-08-29 19:41:03.442135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442167 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 19:41:03.442177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442252 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 19:41:03.442277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442307 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442324 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442382 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 19:41:03.442412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 19:41:03.442471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 19:41:03.442510 | orchestrator | 2025-08-29 19:41:03.442522 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-08-29 19:41:03.442534 | orchestrator | Friday 29 August 2025 19:39:25 +0000 (0:00:04.457) 0:01:49.894 ********* 2025-08-29 19:41:03.442545 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 19:41:03.442559 | orchestrator | skipping: [testbed-manager] 2025-08-29 19:41:03.442572 | orchestrator | 2025-08-29 19:41:03.442585 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 19:41:03.442598 | orchestrator | Friday 29 August 2025 19:39:26 +0000 (0:00:01.419) 0:01:51.314 ********* 2025-08-29 19:41:03.442611 | orchestrator | 2025-08-29 19:41:03.442623 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2025-08-29 19:41:03.442646 | orchestrator | Friday 29 August 2025 19:39:26 +0000 (0:00:00.060) 0:01:51.375 ********* 2025-08-29 19:41:03.442657 | orchestrator | 2025-08-29 19:41:03.442665 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 19:41:03.442673 | orchestrator | Friday 29 August 2025 19:39:26 +0000 (0:00:00.050) 0:01:51.426 ********* 2025-08-29 19:41:03.442680 | orchestrator | 2025-08-29 19:41:03.442693 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 19:41:03.442701 | orchestrator | Friday 29 August 2025 19:39:26 +0000 (0:00:00.048) 0:01:51.475 ********* 2025-08-29 19:41:03.442709 | orchestrator | 2025-08-29 19:41:03.442717 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 19:41:03.442725 | orchestrator | Friday 29 August 2025 19:39:26 +0000 (0:00:00.155) 0:01:51.630 ********* 2025-08-29 19:41:03.442733 | orchestrator | 2025-08-29 19:41:03.442740 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 19:41:03.442748 | orchestrator | Friday 29 August 2025 19:39:27 +0000 (0:00:00.048) 0:01:51.679 ********* 2025-08-29 19:41:03.442756 | orchestrator | 2025-08-29 19:41:03.442764 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 19:41:03.442772 | orchestrator | Friday 29 August 2025 19:39:27 +0000 (0:00:00.048) 0:01:51.727 ********* 2025-08-29 19:41:03.442780 | orchestrator | 2025-08-29 19:41:03.442788 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-08-29 19:41:03.442796 | orchestrator | Friday 29 August 2025 19:39:27 +0000 (0:00:00.070) 0:01:51.797 ********* 2025-08-29 19:41:03.442804 | orchestrator | changed: [testbed-manager] 2025-08-29 19:41:03.442812 | orchestrator | 
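The run of `Flush handlers` tasks above are `meta: flush_handlers` points: each one immediately runs any handlers notified so far instead of waiting for the end of the play, and a handler fires at most once per flush no matter how many tasks notified it (which is why each `Restart … container` handler appears only once despite many changed config tasks). A minimal Python sketch of that notify/flush bookkeeping, purely illustrative and not kolla-ansible code:

```python
class HandlerQueue:
    """Toy model of Ansible handler notification and flushing."""

    def __init__(self):
        self.notified = []  # handler names, in notification order
        self.runs = []      # record of handler executions

    def notify(self, handler):
        # Duplicate notifications are collapsed: a handler runs
        # at most once per flush, however many tasks notified it.
        if handler not in self.notified:
            self.notified.append(handler)

    def flush(self):
        # 'meta: flush_handlers' runs everything notified so far,
        # in notification order, then clears the queue.
        for handler in self.notified:
            self.runs.append(handler)
        self.notified = []


q = HandlerQueue()
q.notify("Restart prometheus-server container")
q.notify("Restart prometheus-server container")  # notified twice, runs once
q.notify("Restart prometheus-node-exporter container")
q.flush()
print(q.runs)
```

This mirrors why the log shows a single restart per container even though several config-copy tasks reported `changed` for the same service.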
2025-08-29 19:41:03.442819 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-08-29 19:41:03.442834 | orchestrator | Friday 29 August 2025 19:39:41 +0000 (0:00:14.102) 0:02:05.900 ********* 2025-08-29 19:41:03.442842 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:03.442850 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:41:03.442858 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:03.442866 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:41:03.442873 | orchestrator | changed: [testbed-manager] 2025-08-29 19:41:03.442881 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:41:03.442889 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:03.442896 | orchestrator | 2025-08-29 19:41:03.442904 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-08-29 19:41:03.442912 | orchestrator | Friday 29 August 2025 19:39:56 +0000 (0:00:15.292) 0:02:21.192 ********* 2025-08-29 19:41:03.442920 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:03.442928 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:03.442935 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:03.442943 | orchestrator | 2025-08-29 19:41:03.442951 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-08-29 19:41:03.442960 | orchestrator | Friday 29 August 2025 19:40:09 +0000 (0:00:12.622) 0:02:33.814 ********* 2025-08-29 19:41:03.442968 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:03.442975 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:03.442983 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:03.442991 | orchestrator | 2025-08-29 19:41:03.442998 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-08-29 19:41:03.443006 | orchestrator | Friday 29 August 2025 19:40:15 +0000 (0:00:06.619) 
0:02:40.434 ********* 2025-08-29 19:41:03.443014 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:03.443022 | orchestrator | changed: [testbed-manager] 2025-08-29 19:41:03.443030 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:41:03.443037 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:41:03.443045 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:41:03.443053 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:03.443061 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:03.443074 | orchestrator | 2025-08-29 19:41:03.443082 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-08-29 19:41:03.443090 | orchestrator | Friday 29 August 2025 19:40:32 +0000 (0:00:16.779) 0:02:57.214 ********* 2025-08-29 19:41:03.443098 | orchestrator | changed: [testbed-manager] 2025-08-29 19:41:03.443106 | orchestrator | 2025-08-29 19:41:03.443114 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-08-29 19:41:03.443122 | orchestrator | Friday 29 August 2025 19:40:41 +0000 (0:00:08.946) 0:03:06.161 ********* 2025-08-29 19:41:03.443130 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:03.443138 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:03.443146 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:03.443153 | orchestrator | 2025-08-29 19:41:03.443162 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-08-29 19:41:03.443170 | orchestrator | Friday 29 August 2025 19:40:51 +0000 (0:00:09.914) 0:03:16.075 ********* 2025-08-29 19:41:03.443177 | orchestrator | changed: [testbed-manager] 2025-08-29 19:41:03.443186 | orchestrator | 2025-08-29 19:41:03.443193 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-08-29 19:41:03.443201 | orchestrator | Friday 29 August 2025 19:40:56 +0000 (0:00:04.689) 
0:03:20.765 ********* 2025-08-29 19:41:03.443209 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:41:03.443217 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:41:03.443225 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:41:03.443233 | orchestrator | 2025-08-29 19:41:03.443241 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:41:03.443249 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 19:41:03.443258 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 19:41:03.443266 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 19:41:03.443274 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 19:41:03.443282 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 19:41:03.443293 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 19:41:03.443301 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 19:41:03.443309 | orchestrator | 2025-08-29 19:41:03.443317 | orchestrator | 2025-08-29 19:41:03.443325 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:41:03.443333 | orchestrator | Friday 29 August 2025 19:41:02 +0000 (0:00:06.319) 0:03:27.084 ********* 2025-08-29 19:41:03.443341 | orchestrator | =============================================================================== 2025-08-29 19:41:03.443349 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.06s 2025-08-29 19:41:03.443356 | orchestrator | prometheus : Copying over 
prometheus config file ----------------------- 24.61s 2025-08-29 19:41:03.443364 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.78s 2025-08-29 19:41:03.443372 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.29s 2025-08-29 19:41:03.443380 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.10s 2025-08-29 19:41:03.443392 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.62s 2025-08-29 19:41:03.443400 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.91s 2025-08-29 19:41:03.443413 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.95s 2025-08-29 19:41:03.443421 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.62s 2025-08-29 19:41:03.443444 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.32s 2025-08-29 19:41:03.443453 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.11s 2025-08-29 19:41:03.443460 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.85s 2025-08-29 19:41:03.443468 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.69s 2025-08-29 19:41:03.443476 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 4.64s 2025-08-29 19:41:03.443484 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.49s 2025-08-29 19:41:03.443492 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.46s 2025-08-29 19:41:03.443499 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.53s 2025-08-29 19:41:03.443507 | orchestrator | prometheus : Copying cloud config file 
for openstack exporter ----------- 2.80s 2025-08-29 19:41:03.443515 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.46s 2025-08-29 19:41:03.443523 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.11s 2025-08-29 19:41:03.443531 | orchestrator | 2025-08-29 19:41:03 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:41:03.443539 | orchestrator | 2025-08-29 19:41:03 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:41:03.443547 | orchestrator | 2025-08-29 19:41:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:41:06.489177 | orchestrator | 2025-08-29 19:41:06 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:41:06.489389 | orchestrator | 2025-08-29 19:41:06 | INFO  | Task e844ba59-4d59-4e2c-8d2c-a5e614f9483e is in state STARTED 2025-08-29 19:41:06.489473 | orchestrator | 2025-08-29 19:41:06 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:41:06.490194 | orchestrator | 2025-08-29 19:41:06 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:41:06.490225 | orchestrator | 2025-08-29 19:41:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:41:09.520102 | orchestrator | 2025-08-29 19:41:09 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:41:09.521464 | orchestrator | 2025-08-29 19:41:09 | INFO  | Task e844ba59-4d59-4e2c-8d2c-a5e614f9483e is in state STARTED 2025-08-29 19:41:09.523595 | orchestrator | 2025-08-29 19:41:09 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state STARTED 2025-08-29 19:41:09.525841 | orchestrator | 2025-08-29 19:41:09 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED 2025-08-29 19:41:09.525890 | orchestrator | 2025-08-29 19:41:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 
19:41:40.034746 | orchestrator | 2025-08-29 19:41:40 | INFO  | Task 
f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED 2025-08-29 19:41:40.035664 | orchestrator | 2025-08-29 19:41:40 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:41:40.036542 | orchestrator | 2025-08-29 19:41:40 | INFO  | Task e844ba59-4d59-4e2c-8d2c-a5e614f9483e is in state SUCCESS 2025-08-29 19:41:40.037297 | orchestrator | 2025-08-29 19:41:40 | INFO  | Task 7f1a7a42-06da-49fb-8494-924c19290e9d is in state SUCCESS 2025-08-29 19:41:40.039343 | orchestrator | 2025-08-29 19:41:40.039444 | orchestrator | 2025-08-29 19:41:40.039465 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:41:40.039479 | orchestrator | 2025-08-29 19:41:40.039493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:41:40.039506 | orchestrator | Friday 29 August 2025 19:41:07 +0000 (0:00:00.276) 0:00:00.276 ********* 2025-08-29 19:41:40.039518 | orchestrator | ok: [testbed-manager] 2025-08-29 19:41:40.039531 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:41:40.039543 | orchestrator | ok: [testbed-node-4] 2025-08-29 19:41:40.039556 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:41:40.039569 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:41:40.039582 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:41:40.039596 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:41:40.039611 | orchestrator | 2025-08-29 19:41:40.039625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:41:40.039639 | orchestrator | Friday 29 August 2025 19:41:07 +0000 (0:00:00.734) 0:00:01.010 ********* 2025-08-29 19:41:40.039653 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 19:41:40.039668 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-08-29 19:41:40.039677 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 
2025-08-29 19:41:40.039685 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 19:41:40.039693 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 19:41:40.039701 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 19:41:40.039709 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 19:41:40.039717 | orchestrator | 2025-08-29 19:41:40.039725 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 19:41:40.039758 | orchestrator | 2025-08-29 19:41:40.039766 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 19:41:40.039775 | orchestrator | Friday 29 August 2025 19:41:08 +0000 (0:00:00.705) 0:00:01.716 ********* 2025-08-29 19:41:40.039784 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:41:40.039793 | orchestrator | 2025-08-29 19:41:40.039801 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 19:41:40.039809 | orchestrator | Friday 29 August 2025 19:41:10 +0000 (0:00:01.423) 0:00:03.139 ********* 2025-08-29 19:41:40.039817 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-08-29 19:41:40.039825 | orchestrator | 2025-08-29 19:41:40.039833 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 19:41:40.039840 | orchestrator | Friday 29 August 2025 19:41:13 +0000 (0:00:03.288) 0:00:06.428 ********* 2025-08-29 19:41:40.039849 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 19:41:40.039859 | orchestrator | changed: [testbed-manager] => (item=swift -> 
https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 19:41:40.039867 | orchestrator | 2025-08-29 19:41:40.039875 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 19:41:40.039883 | orchestrator | Friday 29 August 2025 19:41:19 +0000 (0:00:06.136) 0:00:12.564 ********* 2025-08-29 19:41:40.039890 | orchestrator | ok: [testbed-manager] => (item=service) 2025-08-29 19:41:40.039904 | orchestrator | 2025-08-29 19:41:40.039917 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 19:41:40.039930 | orchestrator | Friday 29 August 2025 19:41:22 +0000 (0:00:03.122) 0:00:15.687 ********* 2025-08-29 19:41:40.039943 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 19:41:40.039956 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-08-29 19:41:40.039969 | orchestrator | 2025-08-29 19:41:40.039982 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 19:41:40.039996 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:03.920) 0:00:19.608 ********* 2025-08-29 19:41:40.040010 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-08-29 19:41:40.040024 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-08-29 19:41:40.040037 | orchestrator | 2025-08-29 19:41:40.040051 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 19:41:40.040085 | orchestrator | Friday 29 August 2025 19:41:33 +0000 (0:00:06.823) 0:00:26.432 ********* 2025-08-29 19:41:40.040103 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-08-29 19:41:40.040115 | orchestrator | 2025-08-29 19:41:40.040128 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:41:40.040142 | 
orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040157 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040170 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040180 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040188 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040211 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040229 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:41:40.040237 | orchestrator | 2025-08-29 19:41:40.040245 | orchestrator | 2025-08-29 19:41:40.040254 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:41:40.040261 | orchestrator | Friday 29 August 2025 19:41:38 +0000 (0:00:04.786) 0:00:31.218 ********* 2025-08-29 19:41:40.040269 | orchestrator | =============================================================================== 2025-08-29 19:41:40.040277 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.82s 2025-08-29 19:41:40.040285 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.14s 2025-08-29 19:41:40.040293 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.79s 2025-08-29 19:41:40.040301 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.92s 2025-08-29 19:41:40.040308 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.29s 2025-08-29 19:41:40.040316 | orchestrator 
| service-ks-register : ceph-rgw | Creating projects ---------------------- 3.12s 2025-08-29 19:41:40.040324 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.42s 2025-08-29 19:41:40.040334 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s 2025-08-29 19:41:40.040348 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-08-29 19:41:40.040361 | orchestrator | 2025-08-29 19:41:40.040375 | orchestrator | 2025-08-29 19:41:40.040388 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:41:40.040595 | orchestrator | 2025-08-29 19:41:40.040609 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:41:40.040621 | orchestrator | Friday 29 August 2025 19:40:27 +0000 (0:00:00.282) 0:00:00.282 ********* 2025-08-29 19:41:40.040634 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:41:40.040647 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:41:40.040660 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:41:40.040672 | orchestrator | 2025-08-29 19:41:40.040685 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:41:40.040699 | orchestrator | Friday 29 August 2025 19:40:27 +0000 (0:00:00.313) 0:00:00.596 ********* 2025-08-29 19:41:40.040711 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 19:41:40.040724 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 19:41:40.040737 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 19:41:40.040750 | orchestrator | 2025-08-29 19:41:40.040764 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 19:41:40.040778 | orchestrator | 2025-08-29 19:41:40.040791 | orchestrator | TASK [placement : 
include_tasks] *********************************************** 2025-08-29 19:41:40.040804 | orchestrator | Friday 29 August 2025 19:40:28 +0000 (0:00:00.425) 0:00:01.021 ********* 2025-08-29 19:41:40.040818 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:41:40.040833 | orchestrator | 2025-08-29 19:41:40.040847 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-08-29 19:41:40.040861 | orchestrator | Friday 29 August 2025 19:40:28 +0000 (0:00:00.546) 0:00:01.568 ********* 2025-08-29 19:41:40.040876 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 19:41:40.040885 | orchestrator | 2025-08-29 19:41:40.040893 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 19:41:40.040901 | orchestrator | Friday 29 August 2025 19:40:32 +0000 (0:00:03.675) 0:00:05.243 ********* 2025-08-29 19:41:40.040909 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 19:41:40.040917 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 19:41:40.040936 | orchestrator | 2025-08-29 19:41:40.040944 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-08-29 19:41:40.040952 | orchestrator | Friday 29 August 2025 19:40:38 +0000 (0:00:06.459) 0:00:11.703 ********* 2025-08-29 19:41:40.040961 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 19:41:40.040969 | orchestrator | 2025-08-29 19:41:40.040984 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 19:41:40.040993 | orchestrator | Friday 29 August 2025 19:40:42 +0000 (0:00:03.442) 0:00:15.145 ********* 2025-08-29 19:41:40.041001 | orchestrator | [WARNING]: Module 
did not set no_log for update_password 2025-08-29 19:41:40.041009 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-08-29 19:41:40.041017 | orchestrator | 2025-08-29 19:41:40.041025 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 19:41:40.041032 | orchestrator | Friday 29 August 2025 19:40:46 +0000 (0:00:04.193) 0:00:19.339 ********* 2025-08-29 19:41:40.041044 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 19:41:40.041058 | orchestrator | 2025-08-29 19:41:40.041070 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-08-29 19:41:40.041083 | orchestrator | Friday 29 August 2025 19:40:49 +0000 (0:00:03.143) 0:00:22.483 ********* 2025-08-29 19:41:40.041096 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 19:41:40.041110 | orchestrator | 2025-08-29 19:41:40.041124 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 19:41:40.041138 | orchestrator | Friday 29 August 2025 19:40:53 +0000 (0:00:04.131) 0:00:26.615 ********* 2025-08-29 19:41:40.041150 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:40.041158 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:40.041166 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:40.041174 | orchestrator | 2025-08-29 19:41:40.041194 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 19:41:40.041204 | orchestrator | Friday 29 August 2025 19:40:53 +0000 (0:00:00.258) 0:00:26.873 ********* 2025-08-29 19:41:40.041216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041256 | orchestrator | 2025-08-29 19:41:40.041265 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 19:41:40.041275 | orchestrator | Friday 29 August 2025 19:40:54 +0000 (0:00:00.802) 0:00:27.675 ********* 2025-08-29 19:41:40.041284 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:40.041293 | orchestrator | 2025-08-29 19:41:40.041307 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 19:41:40.041317 | orchestrator | Friday 29 August 2025 19:40:54 +0000 (0:00:00.120) 0:00:27.796 ********* 2025-08-29 19:41:40.041326 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:40.041335 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:40.041344 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:40.041354 | orchestrator | 2025-08-29 19:41:40.041363 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 19:41:40.041373 | orchestrator | Friday 29 August 2025 19:40:55 +0000 (0:00:00.399) 0:00:28.196 ********* 2025-08-29 19:41:40.041382 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:41:40.041417 | orchestrator | 2025-08-29 19:41:40.041428 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 
2025-08-29 19:41:40.041437 | orchestrator | Friday 29 August 2025 19:40:55 +0000 (0:00:00.487) 0:00:28.683 ********* 2025-08-29 19:41:40.041455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 
19:41:40.041482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041492 | orchestrator | 2025-08-29 19:41:40.041502 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 19:41:40.041511 | orchestrator | Friday 29 August 2025 19:40:57 +0000 (0:00:01.547) 0:00:30.231 ********* 2025-08-29 19:41:40.041526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.041536 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:40.041564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.041586 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:40.041599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.041622 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:40.041635 | orchestrator | 2025-08-29 19:41:40.041648 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 19:41:40.041659 | orchestrator | Friday 29 August 2025 19:40:58 +0000 (0:00:00.841) 0:00:31.072 ********* 2025-08-29 19:41:40.041672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.041685 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:40.041705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.041719 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:40.041742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.041757 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:40.041770 | orchestrator | 2025-08-29 19:41:40.041784 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 19:41:40.041793 | orchestrator | Friday 29 August 2025 19:40:58 +0000 (0:00:00.633) 0:00:31.706 ********* 2025-08-29 19:41:40.041802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041834 | orchestrator | 2025-08-29 19:41:40.041846 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 19:41:40.041855 | orchestrator | Friday 29 August 2025 19:41:00 +0000 (0:00:01.439) 0:00:33.146 ********* 2025-08-29 19:41:40.041863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.041907 | orchestrator | 2025-08-29 19:41:40.041915 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 19:41:40.041923 | orchestrator | Friday 29 August 2025 
19:41:03 +0000 (0:00:03.669) 0:00:36.815 ********* 2025-08-29 19:41:40.041931 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 19:41:40.041939 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 19:41:40.041948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 19:41:40.042210 | orchestrator | 2025-08-29 19:41:40.042221 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 19:41:40.042229 | orchestrator | Friday 29 August 2025 19:41:05 +0000 (0:00:01.744) 0:00:38.560 ********* 2025-08-29 19:41:40.042238 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:40.042246 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:41:40.042254 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:41:40.042262 | orchestrator | 2025-08-29 19:41:40.042270 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 19:41:40.042278 | orchestrator | Friday 29 August 2025 19:41:07 +0000 (0:00:01.518) 0:00:40.078 ********* 2025-08-29 19:41:40.042293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.042304 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:41:40.042321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.042339 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:41:40.042348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 19:41:40.042357 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:41:40.042365 | orchestrator | 2025-08-29 19:41:40.042373 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 19:41:40.042381 | orchestrator | Friday 29 August 2025 19:41:07 +0000 (0:00:00.500) 0:00:40.578 ********* 2025-08-29 19:41:40.042437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.042454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.042470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 19:41:40.042486 | orchestrator | 2025-08-29 19:41:40.042494 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 19:41:40.042502 | orchestrator | Friday 29 August 2025 19:41:08 +0000 (0:00:01.165) 0:00:41.744 ********* 2025-08-29 19:41:40.042510 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:41:40.042518 | orchestrator | 2025-08-29 19:41:40.042526 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 19:41:40.042534 | orchestrator | Friday 29 August 2025 19:41:11 +0000 
(0:00:02.546) 0:00:44.291 *********
2025-08-29 19:41:40.042543 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:41:40.042555 | orchestrator |
2025-08-29 19:41:40.042569 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-08-29 19:41:40.042582 | orchestrator | Friday 29 August 2025 19:41:13 +0000 (0:00:02.122) 0:00:46.414 *********
2025-08-29 19:41:40.042595 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:41:40.042608 | orchestrator |
2025-08-29 19:41:40.042621 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-08-29 19:41:40.042633 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:12.803) 0:00:59.217 *********
2025-08-29 19:41:40.042645 | orchestrator |
2025-08-29 19:41:40.042659 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-08-29 19:41:40.042672 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:00.069) 0:00:59.286 *********
2025-08-29 19:41:40.042685 | orchestrator |
2025-08-29 19:41:40.042699 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-08-29 19:41:40.042713 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:00.079) 0:00:59.366 *********
2025-08-29 19:41:40.042726 | orchestrator |
2025-08-29 19:41:40.042735 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-08-29 19:41:40.042742 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:00.090) 0:00:59.457 *********
2025-08-29 19:41:40.042751 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:41:40.042759 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:41:40.042767 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:41:40.042775 | orchestrator |
2025-08-29 19:41:40.042782 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:41:40.042791 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 19:41:40.042800 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 19:41:40.042807 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 19:41:40.042815 | orchestrator |
2025-08-29 19:41:40.042822 | orchestrator |
2025-08-29 19:41:40.042830 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:41:40.042838 | orchestrator | Friday 29 August 2025 19:41:37 +0000 (0:00:10.709) 0:01:10.167 *********
2025-08-29 19:41:40.042845 | orchestrator | ===============================================================================
2025-08-29 19:41:40.042853 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.80s
2025-08-29 19:41:40.042861 | orchestrator | placement : Restart placement-api container ---------------------------- 10.71s
2025-08-29 19:41:40.042869 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.46s
2025-08-29 19:41:40.042876 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.19s
2025-08-29 19:41:40.042891 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.13s
2025-08-29 19:41:40.042899 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.68s
2025-08-29 19:41:40.042907 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.67s
2025-08-29 19:41:40.042915 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.44s
2025-08-29 19:41:40.042922 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.14s
2025-08-29 19:41:40.042934 | orchestrator | placement : Creating placement databases -------------------------------- 2.55s
2025-08-29 19:41:40.042942 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.12s
2025-08-29 19:41:40.042950 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.74s
2025-08-29 19:41:40.042958 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.55s
2025-08-29 19:41:40.042967 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.52s
2025-08-29 19:41:40.042974 | orchestrator | placement : Copying over config.json files for services ----------------- 1.44s
2025-08-29 19:41:40.042982 | orchestrator | placement : Check placement containers ---------------------------------- 1.17s
2025-08-29 19:41:40.042989 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.84s
2025-08-29 19:41:40.042998 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.80s
2025-08-29 19:41:40.043005 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.63s
2025-08-29 19:41:40.043013 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s
2025-08-29 19:41:40.043021 | orchestrator | 2025-08-29 19:41:40 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED
2025-08-29 19:41:40.043034 | orchestrator | 2025-08-29 19:41:40 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state STARTED
2025-08-29 19:41:40.043042 | orchestrator | 2025-08-29 19:41:40 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:41:43.073799 | orchestrator | 2025-08-29 19:41:43 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED
2025-08-29 19:41:43.075602 | orchestrator | 2025-08-29 19:41:43 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED
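The orchestrator output above follows a simple poll-and-wait pattern: it checks the state of each outstanding task, then sleeps one second before the next round until the tasks leave the STARTED state. A minimal sketch of that pattern (the `get_state` callable and the terminal state names are assumptions for illustration, not the actual OSISM implementation):

```python
import time

# Assumed terminal states, in the style of Celery task states.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll get_state(task_id) for every task until all reach a terminal state."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that have finished; keep polling the rest.
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

With a one-second interval this reproduces the cadence of the log lines above; a fixed interval is the simplest choice, though a backoff would reduce log noise for long-running deploys.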
2025-08-29 19:42:41.030109 | orchestrator | 2025-08-29 19:42:41 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state STARTED
2025-08-29 19:42:41.030552 | orchestrator | 2025-08-29 19:42:41 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED
2025-08-29 19:42:41.031163 | orchestrator | 2025-08-29 19:42:41 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:42:41.031961 | orchestrator | 2025-08-29 19:42:41 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED
2025-08-29 19:42:41.033714 | orchestrator | 2025-08-29 19:42:41 | INFO  | Task 53df4cc6-dce8-4ee1-9b73-a3c62db6298c is in state SUCCESS
2025-08-29 19:42:41.035146 | orchestrator |
2025-08-29 19:42:41.035195 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:42:41.035205 | orchestrator |
2025-08-29 19:42:41.035212 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:42:41.035221 | orchestrator | Friday 29 August 2025 19:37:42 +0000 (0:00:00.362) 0:00:00.362 *********
2025-08-29 19:42:41.035229 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:42:41.035237 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:42:41.035245 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:42:41.035782 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:42:41.035802 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:42:41.035810 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:42:41.035818 | orchestrator |
2025-08-29 19:42:41.035826 | orchestrator | TASK [Group hosts based on enabled
services] ***********************************
2025-08-29 19:42:41.035834 | orchestrator | Friday 29 August 2025 19:37:42 +0000 (0:00:00.829) 0:00:01.192 *********
2025-08-29 19:42:41.035843 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-08-29 19:42:41.035851 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-08-29 19:42:41.035859 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-08-29 19:42:41.035867 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-08-29 19:42:41.035875 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-08-29 19:42:41.035882 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-08-29 19:42:41.035890 | orchestrator |
2025-08-29 19:42:41.035898 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-08-29 19:42:41.035906 | orchestrator |
2025-08-29 19:42:41.035914 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 19:42:41.035921 | orchestrator | Friday 29 August 2025 19:37:43 +0000 (0:00:00.580) 0:00:01.772 *********
2025-08-29 19:42:41.035931 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:42:41.035940 | orchestrator |
2025-08-29 19:42:41.035948 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-08-29 19:42:41.035956 | orchestrator | Friday 29 August 2025 19:37:44 +0000 (0:00:01.101) 0:00:02.874 *********
2025-08-29 19:42:41.035968 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:42:41.035980 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:42:41.035991 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:42:41.036003 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:42:41.036015 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:42:41.036027 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:42:41.036039 | orchestrator |
2025-08-29 19:42:41.036050 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-08-29 19:42:41.036060 | orchestrator | Friday 29 August 2025 19:37:45 +0000 (0:00:01.182) 0:00:04.056 *********
2025-08-29 19:42:41.036072 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:42:41.036083 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:42:41.036094 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:42:41.036107 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:42:41.036118 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:42:41.036131 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:42:41.036144 | orchestrator |
2025-08-29 19:42:41.036157 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-08-29 19:42:41.036169 | orchestrator | Friday 29 August 2025 19:37:47 +0000 (0:00:01.456) 0:00:05.512 *********
2025-08-29 19:42:41.036181 | orchestrator | ok: [testbed-node-0] => {
2025-08-29 19:42:41.036195 | orchestrator |     "changed": false,
2025-08-29 19:42:41.036207 | orchestrator |     "msg": "All assertions passed"
2025-08-29 19:42:41.036245 | orchestrator | }
2025-08-29 19:42:41.036258 | orchestrator | ok: [testbed-node-1] => {
2025-08-29 19:42:41.036270 | orchestrator |     "changed": false,
2025-08-29 19:42:41.036282 | orchestrator |     "msg": "All assertions passed"
2025-08-29 19:42:41.036294 | orchestrator | }
2025-08-29 19:42:41.036306 | orchestrator | ok: [testbed-node-2] => {
2025-08-29 19:42:41.036318 | orchestrator |     "changed": false,
2025-08-29 19:42:41.036356 | orchestrator |     "msg": "All assertions passed"
2025-08-29 19:42:41.036368 | orchestrator | }
2025-08-29 19:42:41.036379 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 19:42:41.036392 | orchestrator |     "changed": false,
2025-08-29 19:42:41.036404 | orchestrator |     "msg": "All assertions passed"
2025-08-29 19:42:41.036416 | orchestrator | }
2025-08-29 19:42:41.036427 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 19:42:41.036440 | orchestrator |     "changed": false,
2025-08-29 19:42:41.036455 | orchestrator |     "msg": "All assertions passed"
2025-08-29 19:42:41.036467 | orchestrator | }
2025-08-29 19:42:41.036481 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 19:42:41.036494 | orchestrator |     "changed": false,
2025-08-29 19:42:41.036507 | orchestrator |     "msg": "All assertions passed"
2025-08-29 19:42:41.036519 | orchestrator | }
2025-08-29 19:42:41.036533 | orchestrator |
2025-08-29 19:42:41.036547 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-08-29 19:42:41.036560 | orchestrator | Friday 29 August 2025 19:37:47 +0000 (0:00:00.619) 0:00:06.132 *********
2025-08-29 19:42:41.036573 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.036588 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.036600 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.036613 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.036623 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.036635 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.036647 | orchestrator |
2025-08-29 19:42:41.036661 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-08-29 19:42:41.036672 | orchestrator | Friday 29 August 2025 19:37:48 +0000 (0:00:00.544) 0:00:06.676 *********
2025-08-29 19:42:41.036702 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-08-29 19:42:41.036717 | orchestrator |
2025-08-29 19:42:41.036732 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-08-29 19:42:41.036745 | orchestrator | Friday 29 August 2025 19:37:51 +0000 (0:00:03.391) 0:00:10.068 *********
2025-08-29 19:42:41.036757 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-08-29 19:42:41.036771 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-08-29 19:42:41.036784 | orchestrator |
2025-08-29 19:42:41.036862 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-08-29 19:42:41.036876 | orchestrator | Friday 29 August 2025 19:37:57 +0000 (0:00:05.857) 0:00:15.926 *********
2025-08-29 19:42:41.036888 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 19:42:41.036904 | orchestrator |
2025-08-29 19:42:41.036917 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-08-29 19:42:41.036928 | orchestrator | Friday 29 August 2025 19:38:00 +0000 (0:00:03.139) 0:00:19.065 *********
2025-08-29 19:42:41.036938 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 19:42:41.036948 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-08-29 19:42:41.036959 | orchestrator |
2025-08-29 19:42:41.036970 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-08-29 19:42:41.036983 | orchestrator | Friday 29 August 2025 19:38:04 +0000 (0:00:03.981) 0:00:23.047 *********
2025-08-29 19:42:41.036994 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 19:42:41.037004 | orchestrator |
2025-08-29 19:42:41.037014 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-08-29 19:42:41.037025 | orchestrator | Friday 29 August 2025 19:38:08 +0000 (0:00:03.659) 0:00:26.707 *********
2025-08-29 19:42:41.037048 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-08-29 19:42:41.037060 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-08-29 19:42:41.037071 | orchestrator |
2025-08-29 19:42:41.037083 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 19:42:41.037095 | orchestrator | Friday 29 August 2025 19:38:16 +0000 (0:00:07.637) 0:00:34.344 *********
2025-08-29 19:42:41.037106 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.037116 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.037127 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.037138 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.037150 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.037162 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.037173 | orchestrator |
2025-08-29 19:42:41.037185 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-08-29 19:42:41.037198 | orchestrator | Friday 29 August 2025 19:38:16 +0000 (0:00:00.772) 0:00:35.117 *********
2025-08-29 19:42:41.037211 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.037223 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.037235 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.037247 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.037259 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.037272 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.037283 | orchestrator |
2025-08-29 19:42:41.037295 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-08-29 19:42:41.037308 | orchestrator | Friday 29 August 2025 19:38:19 +0000 (0:00:02.299) 0:00:37.417 *********
2025-08-29 19:42:41.037320 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:42:41.037357 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:42:41.037370 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:42:41.037381 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:42:41.037393 | orchestrator | ok: [testbed-node-4]
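The container definitions dumped in the items that follow each carry a healthcheck of the form `healthcheck_curl http://<api-address>:9696` with interval 30, retries 3, and timeout 30: the container counts as healthy while the neutron-server API answers HTTP on its bind address. A rough stand-in for such a probe, assuming only that "healthy" means the endpoint responds with a status below 500 (a sketch, not kolla's actual healthcheck script):

```python
import urllib.error
import urllib.request


def http_healthcheck(url, timeout=30):
    """Return True if the endpoint answers HTTP with a status below 500.

    Rough equivalent of the `healthcheck_curl <url>` test configured for
    the neutron_server container (illustrative sketch).
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        # A 4xx still proves the API is up and answering requests.
        return exc.code < 500
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout: the service is down.
        return False
```

In the container runtime such a probe runs every `interval` seconds and the container is flagged unhealthy after `retries` consecutive failures.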
2025-08-29 19:42:41.037406 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:42:41.037418 | orchestrator | 2025-08-29 19:42:41.037431 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-08-29 19:42:41.037443 | orchestrator | Friday 29 August 2025 19:38:20 +0000 (0:00:01.127) 0:00:38.544 ********* 2025-08-29 19:42:41.037455 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.037467 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.037481 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.037493 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.037506 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.037518 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.037530 | orchestrator | 2025-08-29 19:42:41.037542 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-08-29 19:42:41.037555 | orchestrator | Friday 29 August 2025 19:38:24 +0000 (0:00:03.694) 0:00:42.239 ********* 2025-08-29 19:42:41.037571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 
19:42:41.037661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.037693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.037707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.037721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.037733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.037745 | orchestrator | 2025-08-29 19:42:41.037759 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 19:42:41.037779 | orchestrator | Friday 29 August 2025 19:38:27 +0000 (0:00:03.370) 0:00:45.610 ********* 2025-08-29 19:42:41.037798 | orchestrator | [WARNING]: Skipped 2025-08-29 19:42:41.037813 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 19:42:41.037826 | orchestrator | due to this access issue: 2025-08-29 19:42:41.037838 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 19:42:41.037851 | orchestrator | a directory 2025-08-29 19:42:41.037865 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:42:41.037878 | orchestrator | 2025-08-29 19:42:41.037889 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 19:42:41.037941 | orchestrator | Friday 29 August 2025 19:38:28 +0000 (0:00:00.799) 0:00:46.409 ********* 2025-08-29 19:42:41.037956 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:42:41.037969 | orchestrator | 2025-08-29 19:42:41.037982 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 19:42:41.037994 | orchestrator | Friday 29 August 2025 19:38:29 +0000 (0:00:01.014) 0:00:47.424 ********* 2025-08-29 19:42:41.038006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.038078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.038098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.038120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.038184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.038200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.038214 | orchestrator | 2025-08-29 19:42:41.038227 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-08-29 19:42:41.038239 | orchestrator | Friday 29 August 2025 19:38:34 +0000 (0:00:04.824) 0:00:52.248 ********* 2025-08-29 19:42:41.038251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.038265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.038279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.038301 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.038386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.038402 | orchestrator | 
skipping: [testbed-node-4] 2025-08-29 19:42:41.038452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.038465 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.038477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.038490 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.038502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.038515 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.038527 | orchestrator | 2025-08-29 19:42:41.038539 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-08-29 19:42:41.038561 | orchestrator | Friday 29 August 2025 19:38:38 +0000 (0:00:04.345) 0:00:56.594 ********* 2025-08-29 19:42:41.038574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.038588 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 19:42:41.038621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.038635 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.038646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.038659 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 19:42:41.038670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.038683 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.038695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.038714 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.038725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.038738 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.038749 | orchestrator | 2025-08-29 19:42:41.038766 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-08-29 19:42:41.038778 | orchestrator | Friday 29 August 2025 19:38:41 +0000 (0:00:03.176) 0:00:59.770 ********* 2025-08-29 19:42:41.038789 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.038801 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.038812 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.038822 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.038833 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.038843 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.038854 | orchestrator | 2025-08-29 19:42:41.038865 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-08-29 19:42:41.038885 | orchestrator | Friday 29 August 2025 19:38:44 +0000 (0:00:02.582) 0:01:02.352 ********* 2025-08-29 19:42:41.038895 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.038906 | orchestrator | 2025-08-29 19:42:41.038916 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-08-29 19:42:41.038927 | orchestrator | Friday 29 August 2025 19:38:44 +0000 (0:00:00.197) 0:01:02.550 ********* 2025-08-29 19:42:41.038937 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 19:42:41.038948 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.038958 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.038969 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.038980 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.038990 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039000 | orchestrator | 2025-08-29 19:42:41.039011 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-08-29 19:42:41.039022 | orchestrator | Friday 29 August 2025 19:38:45 +0000 (0:00:00.774) 0:01:03.324 ********* 2025-08-29 19:42:41.039032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.039050 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.039073 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.039096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039131 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039154 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039175 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039182 | orchestrator | 2025-08-29 19:42:41.039189 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 19:42:41.039195 | orchestrator | Friday 29 August 2025 19:38:48 +0000 (0:00:03.206) 0:01:06.531 ********* 2025-08-29 19:42:41.039202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.039247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.039254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.039261 | orchestrator | 2025-08-29 19:42:41.039268 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 19:42:41.039274 | orchestrator | Friday 29 August 2025 19:38:53 +0000 (0:00:04.797) 0:01:11.329 ********* 2025-08-29 19:42:41.039285 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.039349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.039361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.039372 | orchestrator | 2025-08-29 19:42:41.039387 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 19:42:41.039399 | orchestrator | Friday 29 August 2025 19:38:59 +0000 (0:00:06.876) 0:01:18.205 ********* 2025-08-29 19:42:41.039419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.039438 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.039454 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039468 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.039482 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039500 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 19:42:41.039512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039524 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039531 | orchestrator | 2025-08-29 19:42:41.039538 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 19:42:41.039544 | orchestrator | Friday 29 August 2025 19:39:03 +0000 (0:00:03.722) 0:01:21.928 ********* 2025-08-29 19:42:41.039551 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039558 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039565 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039571 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:41.039578 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:42:41.039585 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:42:41.039591 | orchestrator | 2025-08-29 19:42:41.039598 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 19:42:41.039605 | orchestrator | Friday 29 August 2025 19:39:07 +0000 (0:00:03.333) 0:01:25.261 ********* 2025-08-29 19:42:41.039612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039619 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039633 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.039658 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.039695 | orchestrator | 2025-08-29 19:42:41.039702 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 19:42:41.039709 | orchestrator | Friday 29 August 2025 19:39:10 +0000 (0:00:03.614) 0:01:28.876 ********* 2025-08-29 19:42:41.039715 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039722 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039729 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039736 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039742 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039749 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039755 | orchestrator | 2025-08-29 19:42:41.039762 | orchestrator 
| TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 19:42:41.039769 | orchestrator | Friday 29 August 2025 19:39:13 +0000 (0:00:02.846) 0:01:31.723 ********* 2025-08-29 19:42:41.039775 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039782 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039789 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039795 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039808 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039818 | orchestrator | 2025-08-29 19:42:41.039825 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 19:42:41.039832 | orchestrator | Friday 29 August 2025 19:39:17 +0000 (0:00:03.839) 0:01:35.563 ********* 2025-08-29 19:42:41.039838 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039845 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039852 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039858 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039865 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039871 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.039878 | orchestrator | 2025-08-29 19:42:41.039885 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 19:42:41.039891 | orchestrator | Friday 29 August 2025 19:39:19 +0000 (0:00:01.865) 0:01:37.428 ********* 2025-08-29 19:42:41.039898 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039905 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039915 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.039922 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039928 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
19:42:41.039935 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.039941 | orchestrator | 2025-08-29 19:42:41.039948 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-08-29 19:42:41.039955 | orchestrator | Friday 29 August 2025 19:39:21 +0000 (0:00:01.913) 0:01:39.342 ********* 2025-08-29 19:42:41.039967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.039977 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.039988 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.039998 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.040015 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.040027 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.040038 | orchestrator | 2025-08-29 19:42:41.040050 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 19:42:41.040059 | orchestrator | Friday 29 August 2025 19:39:24 +0000 (0:00:03.564) 0:01:42.906 ********* 2025-08-29 19:42:41.040065 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.040072 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.040079 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.040085 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.040092 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.040099 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.040105 | orchestrator | 2025-08-29 19:42:41.040112 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 19:42:41.040119 | orchestrator | Friday 29 August 2025 19:39:26 +0000 (0:00:02.017) 0:01:44.924 ********* 2025-08-29 19:42:41.040125 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 19:42:41.040132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.040142 
| orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 19:42:41.040153 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.040164 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 19:42:41.040175 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.040186 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 19:42:41.040197 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.040208 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 19:42:41.040219 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.040231 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 19:42:41.040242 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.040253 | orchestrator | 2025-08-29 19:42:41.040273 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 19:42:41.040284 | orchestrator | Friday 29 August 2025 19:39:29 +0000 (0:00:02.503) 0:01:47.428 ********* 2025-08-29 19:42:41.040295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.040307 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.040320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.040355 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.040382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.040394 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.040406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.040418 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.040429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.040447 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.040459 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.040470 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.040480 | orchestrator | 2025-08-29 19:42:41.040492 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 19:42:41.040503 | orchestrator | Friday 29 August 2025 19:39:31 +0000 (0:00:02.213) 0:01:49.641 ********* 2025-08-29 19:42:41.040519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.040531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.040550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.040562 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.040574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.040592 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.040603 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.040611 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.040618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.040635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.040647 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.040658 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.040669 | orchestrator | 2025-08-29 19:42:41.040678 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-08-29 19:42:41.040689 | orchestrator | Friday 29 August 2025 19:39:33 +0000 (0:00:02.204) 0:01:51.846 ********* 2025-08-29 19:42:41.040700 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.040717 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.040728 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.040739 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.040749 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.040760 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.040771 | orchestrator | 2025-08-29 19:42:41.040782 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 19:42:41.040793 | orchestrator | Friday 29 August 2025 19:39:36 +0000 (0:00:02.756) 0:01:54.603 ********* 2025-08-29 19:42:41.040812 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.040823 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.040834 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.040845 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:42:41.040855 | orchestrator | changed: 
[testbed-node-5]
2025-08-29 19:42:41.040866 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:42:41.040877 | orchestrator |
2025-08-29 19:42:41.040888 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-08-29 19:42:41.040899 | orchestrator | Friday 29 August 2025 19:39:42 +0000 (0:00:05.851) 0:02:00.454 *********
2025-08-29 19:42:41.040910 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.040921 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.040931 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.040943 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.040953 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.040964 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.040975 | orchestrator |
2025-08-29 19:42:41.040986 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-08-29 19:42:41.040997 | orchestrator | Friday 29 August 2025 19:39:46 +0000 (0:00:04.426) 0:02:04.880 *********
2025-08-29 19:42:41.041008 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.041018 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.041029 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041040 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.041051 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041064 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.041074 | orchestrator |
2025-08-29 19:42:41.041085 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-08-29 19:42:41.041096 | orchestrator | Friday 29 August 2025 19:39:48 +0000 (0:00:02.215) 0:02:07.096 *********
2025-08-29 19:42:41.041106 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041116 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041127 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.041139 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.041151 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.041163 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.041173 | orchestrator |
2025-08-29 19:42:41.041184 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-08-29 19:42:41.041195 | orchestrator | Friday 29 August 2025 19:39:51 +0000 (0:00:02.826) 0:02:09.922 *********
2025-08-29 19:42:41.041205 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.041216 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.041226 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041237 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041247 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.041257 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.041267 | orchestrator |
2025-08-29 19:42:41.041277 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-08-29 19:42:41.041288 | orchestrator | Friday 29 August 2025 19:39:54 +0000 (0:00:03.082) 0:02:13.005 *********
2025-08-29 19:42:41.041299 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.041309 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041320 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.041355 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041366 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.041376 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.041386 | orchestrator |
2025-08-29 19:42:41.041396 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-08-29 19:42:41.041407 | orchestrator | Friday 29 August 2025 19:39:56 +0000 (0:00:01.805) 0:02:14.811 *********
2025-08-29 19:42:41.041417 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.041428 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041446 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.041456 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041466 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.041477 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.041487 | orchestrator |
2025-08-29 19:42:41.041498 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-08-29 19:42:41.041508 | orchestrator | Friday 29 August 2025 19:40:00 +0000 (0:00:03.963) 0:02:18.774 *********
2025-08-29 19:42:41.041519 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041529 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.041539 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.041549 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.041559 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041569 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.041580 | orchestrator |
2025-08-29 19:42:41.041590 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-08-29 19:42:41.041600 | orchestrator | Friday 29 August 2025 19:40:02 +0000 (0:00:02.062) 0:02:20.836 *********
2025-08-29 19:42:41.041611 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 19:42:41.041623 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.041639 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 19:42:41.041650 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.041660 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-08-29 19:42:41.041670 | orchestrator | skipping:
[testbed-node-0] 2025-08-29 19:42:41.041681 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 19:42:41.041691 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.041707 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 19:42:41.041719 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.041729 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 19:42:41.041739 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.041750 | orchestrator | 2025-08-29 19:42:41.041760 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 19:42:41.041770 | orchestrator | Friday 29 August 2025 19:40:04 +0000 (0:00:02.149) 0:02:22.986 ********* 2025-08-29 19:42:41.041781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.041792 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:41.041803 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.041820 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:41.041830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.041842 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:42:41.041858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 19:42:41.041870 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:41.041889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.041901 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:42:41.041912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 19:42:41.041924 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:42:41.041936 | orchestrator | 2025-08-29 19:42:41.041951 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 19:42:41.041961 | orchestrator | Friday 29 August 2025 19:40:06 +0000 (0:00:01.810) 0:02:24.796 ********* 2025-08-29 19:42:41.041973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.041985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.042051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 19:42:41.042065 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.042073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 19:42:41.042086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})
2025-08-29 19:42:41.042093 | orchestrator |
2025-08-29 19:42:41.042099 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 19:42:41.042106 | orchestrator | Friday 29 August 2025 19:40:09 +0000 (0:00:03.117) 0:02:27.914 *********
2025-08-29 19:42:41.042113 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:42:41.042120 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:42:41.042126 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:42:41.042133 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:42:41.042140 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:42:41.042146 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:42:41.042153 | orchestrator |
2025-08-29 19:42:41.042159 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-08-29 19:42:41.042166 | orchestrator | Friday 29 August 2025 19:40:11 +0000 (0:00:01.427) 0:02:29.341 *********
2025-08-29 19:42:41.042173 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:42:41.042179 | orchestrator |
2025-08-29 19:42:41.042186 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-08-29 19:42:41.042192 | orchestrator | Friday 29 August 2025 19:40:13 +0000 (0:00:02.229) 0:02:31.571 *********
2025-08-29 19:42:41.042199 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:42:41.042206 | orchestrator |
2025-08-29 19:42:41.042212 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-08-29 19:42:41.042219 | orchestrator | Friday 29 August 2025 19:40:15 +0000 (0:00:02.121) 0:02:33.692 *********
2025-08-29 19:42:41.042225 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:42:41.042232 | orchestrator |
2025-08-29 19:42:41.042238 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 19:42:41.042245 | orchestrator | Friday 29 August 2025 19:40:59 +0000 (0:00:44.113) 0:03:17.805 *********
2025-08-29 19:42:41.042252 | orchestrator |
2025-08-29 19:42:41.042263 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 19:42:41.042269 | orchestrator | Friday 29 August 2025 19:40:59 +0000 (0:00:00.065) 0:03:17.870 *********
2025-08-29 19:42:41.042276 | orchestrator |
2025-08-29 19:42:41.042283 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 19:42:41.042289 | orchestrator | Friday 29 August 2025 19:40:59 +0000 (0:00:00.237) 0:03:18.108 *********
2025-08-29 19:42:41.042296 | orchestrator |
2025-08-29 19:42:41.042302 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 19:42:41.042309 | orchestrator | Friday 29 August 2025 19:40:59 +0000 (0:00:00.062) 0:03:18.170 *********
2025-08-29 19:42:41.042316 | orchestrator |
2025-08-29 19:42:41.042346 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 19:42:41.042354 | orchestrator | Friday 29 August 2025 19:41:00 +0000 (0:00:00.066) 0:03:18.236 *********
2025-08-29 19:42:41.042361 | orchestrator |
2025-08-29 19:42:41.042368 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-08-29 19:42:41.042393 | orchestrator | Friday 29 August 2025 19:41:00 +0000 (0:00:00.066) 0:03:18.303 *********
2025-08-29 19:42:41.042400 | orchestrator |
2025-08-29 19:42:41.042407 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-08-29 19:42:41.042413 | orchestrator | Friday 29 August 2025 19:41:00 +0000 (0:00:00.075) 0:03:18.379 *********
2025-08-29 19:42:41.042420 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:42:41.042427 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:42:41.042433 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:42:41.042440 | orchestrator |
2025-08-29 19:42:41.042447 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-08-29 19:42:41.042453 | orchestrator | Friday 29 August 2025 19:41:28 +0000 (0:00:28.502) 0:03:46.882 *********
2025-08-29 19:42:41.042460 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:42:41.042467 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:42:41.042473 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:42:41.042480 | orchestrator |
2025-08-29 19:42:41.042486 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:42:41.042493 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 19:42:41.042502 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-08-29 19:42:41.042509 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-08-29 19:42:41.042516 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 19:42:41.042522 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 19:42:41.042529 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 19:42:41.042536 | orchestrator |
2025-08-29 19:42:41.042542 | orchestrator |
2025-08-29 19:42:41.042549 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:42:41.042556 | orchestrator | Friday 29 August 2025 19:42:38 +0000 (0:01:09.446) 0:04:56.328 *********
2025-08-29 19:42:41.042563 | orchestrator | ===============================================================================
2025-08-29 19:42:41.042569 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 69.45s
2025-08-29 19:42:41.042576 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.11s
2025-08-29 19:42:41.042583 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.50s
2025-08-29 19:42:41.042589 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.64s
2025-08-29 19:42:41.042596 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.88s
2025-08-29 19:42:41.042602 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.86s
2025-08-29 19:42:41.042609 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.85s
2025-08-29 19:42:41.042616 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.82s
2025-08-29 19:42:41.042622 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.80s
2025-08-29 19:42:41.042629 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.43s
2025-08-29 19:42:41.042635 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.35s
2025-08-29 19:42:41.042644 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.98s
2025-08-29 19:42:41.042664 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.96s
2025-08-29 19:42:41.042676 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.84s
2025-08-29 19:42:41.042687 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.72s
2025-08-29 19:42:41.042698 | orchestrator | Setting sysctl values --------------------------------------------------- 3.69s
2025-08-29 19:42:41.042709 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.66s
2025-08-29 19:42:41.042721 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.61s
2025-08-29 19:42:41.042738 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 3.56s
2025-08-29 19:42:41.042751 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.39s
2025-08-29 19:42:41.042763 | orchestrator | 2025-08-29 19:42:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:42:44.059504 | orchestrator | 2025-08-29 19:42:44 | INFO  | Task f515d655-68c7-44f6-989d-51c52e8540b1 is in state SUCCESS
2025-08-29 19:42:44.060637 | orchestrator |
2025-08-29 19:42:44.060684 | orchestrator |
2025-08-29 19:42:44.060697 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:42:44.060710 | orchestrator |
2025-08-29 19:42:44.060722 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:42:44.060735 | orchestrator | Friday 29 August 2025 19:40:49 +0000 (0:00:00.262) 0:00:00.262 *********
2025-08-29 19:42:44.060747 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:42:44.060760 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:42:44.060772 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:42:44.060784 | orchestrator |
2025-08-29 19:42:44.060796 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:42:44.060808 | orchestrator | Friday 29 August 2025 19:40:49 +0000 (0:00:00.310) 0:00:00.573 *********
2025-08-29 19:42:44.060820 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-08-29 19:42:44.060833 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-08-29 19:42:44.060844 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-08-29 19:42:44.060857 | orchestrator | 2025-08-29
19:42:44.060868 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 19:42:44.060880 | orchestrator | 2025-08-29 19:42:44.060891 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 19:42:44.060903 | orchestrator | Friday 29 August 2025 19:40:49 +0000 (0:00:00.426) 0:00:00.999 ********* 2025-08-29 19:42:44.060915 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:42:44.060928 | orchestrator | 2025-08-29 19:42:44.060939 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-08-29 19:42:44.060951 | orchestrator | Friday 29 August 2025 19:40:50 +0000 (0:00:00.575) 0:00:01.575 ********* 2025-08-29 19:42:44.060963 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 19:42:44.060975 | orchestrator | 2025-08-29 19:42:44.060987 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 19:42:44.061000 | orchestrator | Friday 29 August 2025 19:40:53 +0000 (0:00:03.402) 0:00:04.977 ********* 2025-08-29 19:42:44.061012 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 19:42:44.061023 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 19:42:44.061035 | orchestrator | 2025-08-29 19:42:44.061047 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 19:42:44.061059 | orchestrator | Friday 29 August 2025 19:41:00 +0000 (0:00:06.500) 0:00:11.478 ********* 2025-08-29 19:42:44.061070 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 19:42:44.061082 | orchestrator | 2025-08-29 19:42:44.061364 | orchestrator | TASK [service-ks-register : magnum | 
Creating users] *************************** 2025-08-29 19:42:44.061402 | orchestrator | Friday 29 August 2025 19:41:03 +0000 (0:00:03.246) 0:00:14.725 ********* 2025-08-29 19:42:44.061415 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 19:42:44.061427 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 19:42:44.061438 | orchestrator | 2025-08-29 19:42:44.061450 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-08-29 19:42:44.061462 | orchestrator | Friday 29 August 2025 19:41:07 +0000 (0:00:03.854) 0:00:18.580 ********* 2025-08-29 19:42:44.061474 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 19:42:44.061486 | orchestrator | 2025-08-29 19:42:44.061498 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 19:42:44.061510 | orchestrator | Friday 29 August 2025 19:41:10 +0000 (0:00:03.219) 0:00:21.800 ********* 2025-08-29 19:42:44.061522 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 19:42:44.061534 | orchestrator | 2025-08-29 19:42:44.061546 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 19:42:44.061557 | orchestrator | Friday 29 August 2025 19:41:14 +0000 (0:00:04.074) 0:00:25.874 ********* 2025-08-29 19:42:44.061569 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.061581 | orchestrator | 2025-08-29 19:42:44.061593 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 19:42:44.061604 | orchestrator | Friday 29 August 2025 19:41:17 +0000 (0:00:03.016) 0:00:28.890 ********* 2025-08-29 19:42:44.061616 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.061628 | orchestrator | 2025-08-29 19:42:44.061639 | orchestrator | TASK [magnum : Creating Magnum trustee user role] 
****************************** 2025-08-29 19:42:44.061651 | orchestrator | Friday 29 August 2025 19:41:21 +0000 (0:00:03.730) 0:00:32.620 ********* 2025-08-29 19:42:44.061663 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.061674 | orchestrator | 2025-08-29 19:42:44.061686 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 19:42:44.061697 | orchestrator | Friday 29 August 2025 19:41:24 +0000 (0:00:03.539) 0:00:36.160 ********* 2025-08-29 19:42:44.061743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.061760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.061772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.061798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.061812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.061838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.061852 | orchestrator | 2025-08-29 19:42:44.061864 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 19:42:44.061876 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:01.377) 0:00:37.538 ********* 2025-08-29 19:42:44.061888 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:44.061899 | 
orchestrator | 2025-08-29 19:42:44.061911 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 19:42:44.061922 | orchestrator | Friday 29 August 2025 19:41:26 +0000 (0:00:00.149) 0:00:37.687 ********* 2025-08-29 19:42:44.061934 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:44.061947 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:44.061959 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:44.061971 | orchestrator | 2025-08-29 19:42:44.061983 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 19:42:44.062002 | orchestrator | Friday 29 August 2025 19:41:27 +0000 (0:00:00.742) 0:00:38.430 ********* 2025-08-29 19:42:44.062068 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:42:44.062083 | orchestrator | 2025-08-29 19:42:44.062095 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 19:42:44.062107 | orchestrator | Friday 29 August 2025 19:41:28 +0000 (0:00:01.078) 0:00:39.508 ********* 2025-08-29 19:42:44.062120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.062161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.062174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.062203 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.062216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.062340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.062355 | orchestrator | 2025-08-29 19:42:44.062367 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 19:42:44.062379 | orchestrator | Friday 29 August 2025 19:41:31 +0000 (0:00:02.860) 0:00:42.369 ********* 2025-08-29 19:42:44.062391 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:42:44.062405 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:42:44.062416 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:42:44.062427 | orchestrator | 2025-08-29 19:42:44.062439 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 19:42:44.062450 | orchestrator | Friday 29 August 2025 19:41:31 +0000 (0:00:00.393) 0:00:42.762 ********* 2025-08-29 19:42:44.062461 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:42:44.062473 | orchestrator | 2025-08-29 19:42:44.062484 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 19:42:44.062495 | orchestrator | Friday 29 August 2025 19:41:32 +0000 (0:00:00.922) 0:00:43.685 ********* 2025-08-29 19:42:44.062507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.062532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.062553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.062565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.062577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.062589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.062601 | orchestrator | 2025-08-29 19:42:44.062614 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 19:42:44.062625 | orchestrator | Friday 29 August 2025 19:41:35 +0000 (0:00:02.975) 0:00:46.660 ********* 2025-08-29 19:42:44.062656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.062683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.062696 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:44.062710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.062724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.062736 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:44.062748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.062779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.062789 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:44.062796 | orchestrator | 2025-08-29 19:42:44.062803 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 19:42:44.062811 | orchestrator | Friday 29 August 2025 19:41:35 +0000 (0:00:00.538) 0:00:47.199 ********* 2025-08-29 19:42:44.062819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.062826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.062834 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:44.062842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.062850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-08-29 19:42:44.062862 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:44.062880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.062888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.062895 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:44.063010 | orchestrator | 2025-08-29 19:42:44.063019 | orchestrator | TASK [magnum : Copying over config.json files 
for services] ******************** 2025-08-29 19:42:44.063028 | orchestrator | Friday 29 August 2025 19:41:36 +0000 (0:00:00.821) 0:00:48.020 ********* 2025-08-29 19:42:44.063037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063107 | orchestrator | 2025-08-29 19:42:44.063115 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 19:42:44.063124 | orchestrator | Friday 29 August 2025 19:41:39 +0000 (0:00:02.632) 0:00:50.653 ********* 2025-08-29 19:42:44.063133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063201 | orchestrator | 2025-08-29 19:42:44.063208 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 19:42:44.063215 | orchestrator | Friday 29 August 2025 19:41:45 +0000 (0:00:06.092) 0:00:56.745 ********* 2025-08-29 19:42:44.063232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.063241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.063248 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:42:44.063256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.063264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.063276 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:44.063284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 19:42:44.063300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:42:44.063308 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:44.063316 | orchestrator | 2025-08-29 19:42:44.063353 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 19:42:44.063360 | orchestrator | Friday 29 August 2025 19:41:46 +0000 (0:00:00.721) 0:00:57.466 ********* 2025-08-29 19:42:44.063368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 19:42:44.063403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:42:44.063432 | orchestrator | 2025-08-29 19:42:44.063440 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 19:42:44.063447 | orchestrator | Friday 29 August 2025 19:41:48 +0000 (0:00:02.707) 0:01:00.174 ********* 2025-08-29 19:42:44.063454 | orchestrator 
| skipping: [testbed-node-0] 2025-08-29 19:42:44.063462 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:42:44.063469 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:42:44.063476 | orchestrator | 2025-08-29 19:42:44.063484 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-08-29 19:42:44.063491 | orchestrator | Friday 29 August 2025 19:41:49 +0000 (0:00:00.330) 0:01:00.505 ********* 2025-08-29 19:42:44.063498 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.063505 | orchestrator | 2025-08-29 19:42:44.063518 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-08-29 19:42:44.063525 | orchestrator | Friday 29 August 2025 19:41:51 +0000 (0:00:02.445) 0:01:02.950 ********* 2025-08-29 19:42:44.063532 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.063539 | orchestrator | 2025-08-29 19:42:44.063547 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-08-29 19:42:44.063554 | orchestrator | Friday 29 August 2025 19:41:53 +0000 (0:00:02.168) 0:01:05.119 ********* 2025-08-29 19:42:44.063561 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.063568 | orchestrator | 2025-08-29 19:42:44.063669 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 19:42:44.063680 | orchestrator | Friday 29 August 2025 19:42:09 +0000 (0:00:15.168) 0:01:20.287 ********* 2025-08-29 19:42:44.063687 | orchestrator | 2025-08-29 19:42:44.063694 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 19:42:44.063702 | orchestrator | Friday 29 August 2025 19:42:09 +0000 (0:00:00.088) 0:01:20.376 ********* 2025-08-29 19:42:44.063709 | orchestrator | 2025-08-29 19:42:44.063716 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 
19:42:44.063723 | orchestrator | Friday 29 August 2025 19:42:09 +0000 (0:00:00.073) 0:01:20.450 ********* 2025-08-29 19:42:44.063730 | orchestrator | 2025-08-29 19:42:44.063737 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-08-29 19:42:44.063744 | orchestrator | Friday 29 August 2025 19:42:09 +0000 (0:00:00.073) 0:01:20.524 ********* 2025-08-29 19:42:44.063752 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.063759 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:42:44.063766 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:42:44.063773 | orchestrator | 2025-08-29 19:42:44.063780 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-08-29 19:42:44.063788 | orchestrator | Friday 29 August 2025 19:42:27 +0000 (0:00:18.472) 0:01:38.996 ********* 2025-08-29 19:42:44.063795 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:42:44.063802 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:42:44.063809 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:42:44.063816 | orchestrator | 2025-08-29 19:42:44.063824 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:42:44.063831 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 19:42:44.063840 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:42:44.063848 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 19:42:44.063855 | orchestrator | 2025-08-29 19:42:44.063862 | orchestrator | 2025-08-29 19:42:44.063874 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:42:44.063881 | orchestrator | Friday 29 August 2025 19:42:42 +0000 (0:00:14.275) 0:01:53.272 ********* 
2025-08-29 19:42:44.063889 | orchestrator | =============================================================================== 2025-08-29 19:42:44.063896 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.47s 2025-08-29 19:42:44.063909 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.17s 2025-08-29 19:42:44.063916 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.28s 2025-08-29 19:42:44.063923 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.50s 2025-08-29 19:42:44.063931 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.09s 2025-08-29 19:42:44.063938 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.07s 2025-08-29 19:42:44.063945 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.85s 2025-08-29 19:42:44.063957 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.73s 2025-08-29 19:42:44.063965 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.54s 2025-08-29 19:42:44.063972 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.40s 2025-08-29 19:42:44.063979 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.25s 2025-08-29 19:42:44.063986 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.22s 2025-08-29 19:42:44.063994 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.02s 2025-08-29 19:42:44.064006 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.98s 2025-08-29 19:42:44.064018 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.86s 2025-08-29 
19:42:44.064029 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.71s 2025-08-29 19:42:44.064040 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2025-08-29 19:42:44.064051 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.45s 2025-08-29 19:42:44.064063 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.17s 2025-08-29 19:42:44.064074 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.38s 2025-08-29 19:42:44.064085 | orchestrator | 2025-08-29 19:42:44 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:42:44.064097 | orchestrator | 2025-08-29 19:42:44 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:42:44.064109 | orchestrator | 2025-08-29 19:42:44 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:42:44.064121 | orchestrator | 2025-08-29 19:42:44 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:42:44.064134 | orchestrator | 2025-08-29 19:42:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:42:47.096301 | orchestrator | 2025-08-29 19:42:47 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:42:47.096670 | orchestrator | 2025-08-29 19:42:47 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:42:47.097231 | orchestrator | 2025-08-29 19:42:47 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:42:47.097948 | orchestrator | 2025-08-29 19:42:47 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:42:47.097980 | orchestrator | 2025-08-29 19:42:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:42:50.117084 | orchestrator | 2025-08-29 19:42:50 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:42:50.117386 | orchestrator | 2025-08-29 19:42:50 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:42:50.118045 | orchestrator | 2025-08-29 19:42:50 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:42:50.118608 | orchestrator | 2025-08-29 19:42:50 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:42:50.118631 | orchestrator | 2025-08-29 19:42:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:42:53.143539 | orchestrator | 2025-08-29 19:42:53 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:42:53.143655 | orchestrator | 2025-08-29 19:42:53 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:42:53.144265 | orchestrator | 2025-08-29 19:42:53 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:42:53.144794 | orchestrator | 2025-08-29 19:42:53 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:42:53.144834 | orchestrator | 2025-08-29 19:42:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:42:56.168975 | orchestrator | 2025-08-29 19:42:56 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:42:56.169148 | orchestrator | 2025-08-29 19:42:56 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:42:56.169791 | orchestrator | 2025-08-29 19:42:56 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:42:56.170464 | orchestrator | 2025-08-29 19:42:56 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:42:56.170486 | orchestrator | 2025-08-29 19:42:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:42:59.193765 | orchestrator | 2025-08-29 19:42:59 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:42:59.193863 | orchestrator | 2025-08-29 19:42:59 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:42:59.194083 | orchestrator | 2025-08-29 19:42:59 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:42:59.194735 | orchestrator | 2025-08-29 19:42:59 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:42:59.194749 | orchestrator | 2025-08-29 19:42:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:02.219750 | orchestrator | 2025-08-29 19:43:02 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:02.219852 | orchestrator | 2025-08-29 19:43:02 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:02.220490 | orchestrator | 2025-08-29 19:43:02 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:02.221099 | orchestrator | 2025-08-29 19:43:02 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:02.221124 | orchestrator | 2025-08-29 19:43:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:05.254249 | orchestrator | 2025-08-29 19:43:05 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:05.254533 | orchestrator | 2025-08-29 19:43:05 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:05.256009 | orchestrator | 2025-08-29 19:43:05 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:05.256696 | orchestrator | 2025-08-29 19:43:05 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:05.256733 | orchestrator | 2025-08-29 19:43:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:08.287924 | orchestrator | 2025-08-29 19:43:08 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:08.288042 | orchestrator | 2025-08-29 19:43:08 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:08.288058 | orchestrator | 2025-08-29 19:43:08 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:08.289120 | orchestrator | 2025-08-29 19:43:08 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:08.289161 | orchestrator | 2025-08-29 19:43:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:11.326456 | orchestrator | 2025-08-29 19:43:11 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:11.326529 | orchestrator | 2025-08-29 19:43:11 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:11.327453 | orchestrator | 2025-08-29 19:43:11 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:11.329038 | orchestrator | 2025-08-29 19:43:11 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:11.329104 | orchestrator | 2025-08-29 19:43:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:14.361108 | orchestrator | 2025-08-29 19:43:14 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:14.361514 | orchestrator | 2025-08-29 19:43:14 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:14.362352 | orchestrator | 2025-08-29 19:43:14 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:14.363014 | orchestrator | 2025-08-29 19:43:14 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:14.363041 | orchestrator | 2025-08-29 19:43:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:17.397634 | orchestrator | 2025-08-29 19:43:17 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:17.400102 | orchestrator | 2025-08-29 19:43:17 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:17.402691 | orchestrator | 2025-08-29 19:43:17 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:17.404408 | orchestrator | 2025-08-29 19:43:17 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:17.404457 | orchestrator | 2025-08-29 19:43:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:20.445930 | orchestrator | 2025-08-29 19:43:20 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:20.448703 | orchestrator | 2025-08-29 19:43:20 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:20.452232 | orchestrator | 2025-08-29 19:43:20 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:20.454065 | orchestrator | 2025-08-29 19:43:20 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:20.454118 | orchestrator | 2025-08-29 19:43:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:23.497140 | orchestrator | 2025-08-29 19:43:23 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:23.499072 | orchestrator | 2025-08-29 19:43:23 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:23.500195 | orchestrator | 2025-08-29 19:43:23 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:23.501698 | orchestrator | 2025-08-29 19:43:23 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:23.501833 | orchestrator | 2025-08-29 19:43:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:26.550801 | orchestrator | 2025-08-29 19:43:26 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:26.550893 | orchestrator | 2025-08-29 19:43:26 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:26.550900 | orchestrator | 2025-08-29 19:43:26 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:26.551764 | orchestrator | 2025-08-29 19:43:26 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:26.551785 | orchestrator | 2025-08-29 19:43:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:29.586451 | orchestrator | 2025-08-29 19:43:29 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:29.587774 | orchestrator | 2025-08-29 19:43:29 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:29.588844 | orchestrator | 2025-08-29 19:43:29 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:29.590223 | orchestrator | 2025-08-29 19:43:29 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:29.590284 | orchestrator | 2025-08-29 19:43:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:32.651930 | orchestrator | 2025-08-29 19:43:32 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:32.654135 | orchestrator | 2025-08-29 19:43:32 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:32.656125 | orchestrator | 2025-08-29 19:43:32 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:32.657702 | orchestrator | 2025-08-29 19:43:32 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:32.657793 | orchestrator | 2025-08-29 19:43:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:35.698181 | orchestrator | 2025-08-29 19:43:35 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:35.701039 | orchestrator | 2025-08-29 19:43:35 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:35.702839 | orchestrator | 2025-08-29 19:43:35 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:35.704782 | orchestrator | 2025-08-29 19:43:35 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:35.704839 | orchestrator | 2025-08-29 19:43:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:38.750550 | orchestrator | 2025-08-29 19:43:38 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:38.752189 | orchestrator | 2025-08-29 19:43:38 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:38.753547 | orchestrator | 2025-08-29 19:43:38 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:38.755184 | orchestrator | 2025-08-29 19:43:38 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:38.755317 | orchestrator | 2025-08-29 19:43:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:41.794430 | orchestrator | 2025-08-29 19:43:41 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:43:41.795066 | orchestrator | 2025-08-29 19:43:41 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:43:41.796513 | orchestrator | 2025-08-29 19:43:41 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:43:41.798070 | orchestrator | 2025-08-29 19:43:41 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:43:41.798128 | orchestrator | 2025-08-29 19:43:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:43:44.832471 | orchestrator | 2025-08-29 19:43:44 | INFO  | Task 
edb6af18-7336-4f36-872b-397e7fd1de6d is in state STARTED 2025-08-29 19:44:30.618319 | orchestrator | 2025-08-29 19:44:30 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:30.620501 | orchestrator | 2025-08-29 19:44:30 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:30.622393 | orchestrator | 2025-08-29 19:44:30 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:30.622443 | orchestrator | 2025-08-29 19:44:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:33.676206 | orchestrator | 2025-08-29 19:44:33.676304 | orchestrator | 2025-08-29 19:44:33.676318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:44:33.676327 | orchestrator | 2025-08-29 19:44:33.676335 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:44:33.676343 | orchestrator | Friday 29 August 2025 19:41:42 +0000 (0:00:00.255) 0:00:00.255 ********* 2025-08-29 19:44:33.676351 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:44:33.676359 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:44:33.676365 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:44:33.676372 | orchestrator | 2025-08-29 19:44:33.676377 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:44:33.676384 | orchestrator | Friday 29 August 2025 19:41:42 +0000 (0:00:00.270) 0:00:00.526 ********* 2025-08-29 19:44:33.676390 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-08-29 19:44:33.676397 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-08-29 19:44:33.676404 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-08-29 19:44:33.676410 | orchestrator | 2025-08-29 19:44:33.676416 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2025-08-29 19:44:33.676423 | orchestrator | 2025-08-29 19:44:33.676458 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 19:44:33.676465 | orchestrator | Friday 29 August 2025 19:41:42 +0000 (0:00:00.393) 0:00:00.919 ********* 2025-08-29 19:44:33.676471 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:44:33.676478 | orchestrator | 2025-08-29 19:44:33.676491 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-08-29 19:44:33.676498 | orchestrator | Friday 29 August 2025 19:41:43 +0000 (0:00:00.983) 0:00:01.902 ********* 2025-08-29 19:44:33.676504 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-08-29 19:44:33.676509 | orchestrator | 2025-08-29 19:44:33.676515 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-08-29 19:44:33.676522 | orchestrator | Friday 29 August 2025 19:41:47 +0000 (0:00:03.517) 0:00:05.420 ********* 2025-08-29 19:44:33.676529 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-08-29 19:44:33.676536 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-08-29 19:44:33.676544 | orchestrator | 2025-08-29 19:44:33.676551 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-08-29 19:44:33.676558 | orchestrator | Friday 29 August 2025 19:41:54 +0000 (0:00:06.630) 0:00:12.050 ********* 2025-08-29 19:44:33.676565 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 19:44:33.676573 | orchestrator | 2025-08-29 19:44:33.676580 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-08-29 19:44:33.676588 | 
orchestrator | Friday 29 August 2025 19:41:57 +0000 (0:00:03.066) 0:00:15.116 ********* 2025-08-29 19:44:33.676609 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 19:44:33.676617 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-08-29 19:44:33.676624 | orchestrator | 2025-08-29 19:44:33.676631 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-08-29 19:44:33.676638 | orchestrator | Friday 29 August 2025 19:42:00 +0000 (0:00:03.648) 0:00:18.765 ********* 2025-08-29 19:44:33.676646 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 19:44:33.676654 | orchestrator | 2025-08-29 19:44:33.676661 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-08-29 19:44:33.676668 | orchestrator | Friday 29 August 2025 19:42:04 +0000 (0:00:03.265) 0:00:22.030 ********* 2025-08-29 19:44:33.676675 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-08-29 19:44:33.676680 | orchestrator | 2025-08-29 19:44:33.676686 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 19:44:33.676693 | orchestrator | Friday 29 August 2025 19:42:08 +0000 (0:00:04.240) 0:00:26.271 ********* 2025-08-29 19:44:33.676719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.676743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.676753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.676761 | orchestrator | 2025-08-29 19:44:33.676768 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 19:44:33.676780 | orchestrator | Friday 29 August 2025 19:42:12 +0000 (0:00:04.366) 0:00:30.637 ********* 2025-08-29 19:44:33.676787 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:44:33.676794 | orchestrator | 2025-08-29 19:44:33.676808 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-08-29 19:44:33.676814 | orchestrator | Friday 29 August 2025 19:42:13 +0000 (0:00:00.735) 0:00:31.373 ********* 2025-08-29 19:44:33.676820 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.676826 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:44:33.676832 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:44:33.676837 | orchestrator | 2025-08-29 19:44:33.676843 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-08-29 19:44:33.676849 | orchestrator | Friday 29 August 2025 19:42:17 +0000 (0:00:04.570) 0:00:35.943 ********* 2025-08-29 19:44:33.676855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 19:44:33.676861 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 19:44:33.676867 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 19:44:33.676874 | orchestrator | 2025-08-29 19:44:33.676880 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-08-29 19:44:33.676887 | orchestrator | Friday 29 August 2025 19:42:19 +0000 (0:00:01.570) 0:00:37.514 ********* 2025-08-29 19:44:33.676893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 19:44:33.676899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 19:44:33.676906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 19:44:33.676914 | orchestrator | 2025-08-29 19:44:33.676920 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-08-29 19:44:33.676926 | orchestrator | Friday 29 August 2025 19:42:20 +0000 (0:00:01.315) 0:00:38.830 ********* 2025-08-29 19:44:33.676932 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:44:33.676939 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:44:33.676946 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:44:33.676953 | orchestrator | 2025-08-29 19:44:33.676959 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-08-29 19:44:33.676965 | orchestrator | Friday 29 August 2025 19:42:21 +0000 (0:00:00.654) 0:00:39.485 ********* 2025-08-29 19:44:33.676971 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.676977 | orchestrator | 2025-08-29 19:44:33.676983 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-08-29 
19:44:33.676989 | orchestrator | Friday 29 August 2025 19:42:21 +0000 (0:00:00.350) 0:00:39.835 ********* 2025-08-29 19:44:33.676995 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677003 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677011 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677018 | orchestrator | 2025-08-29 19:44:33.677030 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 19:44:33.677037 | orchestrator | Friday 29 August 2025 19:42:22 +0000 (0:00:00.327) 0:00:40.163 ********* 2025-08-29 19:44:33.677043 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:44:33.677049 | orchestrator | 2025-08-29 19:44:33.677056 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-08-29 19:44:33.677063 | orchestrator | Friday 29 August 2025 19:42:22 +0000 (0:00:00.533) 0:00:40.697 ********* 2025-08-29 19:44:33.677078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677120 | orchestrator | 2025-08-29 19:44:33.677127 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 19:44:33.677133 | orchestrator | Friday 29 August 2025 19:42:27 +0000 (0:00:04.355) 0:00:45.053 ********* 2025-08-29 19:44:33.677207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:44:33.677217 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:44:33.677244 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:44:33.677266 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 19:44:33.677273 | orchestrator | 2025-08-29 19:44:33.677281 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 19:44:33.677288 | orchestrator | Friday 29 August 2025 19:42:32 +0000 (0:00:05.526) 0:00:50.579 ********* 2025-08-29 19:44:33.677301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:44:33.677315 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:44:33.677337 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677345 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 19:44:33.677353 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677373 | orchestrator | 2025-08-29 19:44:33.677388 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-08-29 19:44:33.677395 | orchestrator | Friday 29 August 2025 
19:42:35 +0000 (0:00:03.247) 0:00:53.827 ********* 2025-08-29 19:44:33.677402 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677409 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677422 | orchestrator | 2025-08-29 19:44:33.677429 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 19:44:33.677436 | orchestrator | Friday 29 August 2025 19:42:39 +0000 (0:00:03.919) 0:00:57.747 ********* 2025-08-29 19:44:33.677448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677483 | orchestrator | 2025-08-29 19:44:33.677490 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 19:44:33.677497 | orchestrator | 
Friday 29 August 2025 19:42:45 +0000 (0:00:05.569) 0:01:03.316 ********* 2025-08-29 19:44:33.677504 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.677511 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:44:33.677518 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:44:33.677525 | orchestrator | 2025-08-29 19:44:33.677532 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 19:44:33.677539 | orchestrator | Friday 29 August 2025 19:42:54 +0000 (0:00:09.253) 0:01:12.570 ********* 2025-08-29 19:44:33.677546 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677553 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677560 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677567 | orchestrator | 2025-08-29 19:44:33.677575 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-08-29 19:44:33.677588 | orchestrator | Friday 29 August 2025 19:42:58 +0000 (0:00:04.289) 0:01:16.859 ********* 2025-08-29 19:44:33 | INFO  | Task edb6af18-7336-4f36-872b-397e7fd1de6d is in state SUCCESS 2025-08-29 19:44:33.677603 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677610 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677617 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677624 | orchestrator | 2025-08-29 19:44:33.677632 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-08-29 19:44:33.677639 | orchestrator | Friday 29 August 2025 19:43:03 +0000 (0:00:04.214) 0:01:21.073 ********* 2025-08-29 19:44:33.677646 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677660 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677666 | orchestrator | 2025-08-29 19:44:33.677673 | 
orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-08-29 19:44:33.677679 | orchestrator | Friday 29 August 2025 19:43:06 +0000 (0:00:03.499) 0:01:24.573 ********* 2025-08-29 19:44:33.677686 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677698 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677713 | orchestrator | 2025-08-29 19:44:33.677719 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-08-29 19:44:33.677725 | orchestrator | Friday 29 August 2025 19:43:09 +0000 (0:00:02.926) 0:01:27.500 ********* 2025-08-29 19:44:33.677731 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677737 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677744 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677750 | orchestrator | 2025-08-29 19:44:33.677755 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 19:44:33.677760 | orchestrator | Friday 29 August 2025 19:43:09 +0000 (0:00:00.294) 0:01:27.794 ********* 2025-08-29 19:44:33.677767 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 19:44:33.677773 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677779 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 19:44:33.677785 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677791 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 19:44:33.677797 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677802 | orchestrator | 2025-08-29 19:44:33.677808 | orchestrator | TASK [glance : Check glance containers] 
**************************************** 2025-08-29 19:44:33.677815 | orchestrator | Friday 29 August 2025 19:43:12 +0000 (0:00:02.781) 0:01:30.576 ********* 2025-08-29 19:44:33.677828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 19:44:33.677868 | orchestrator | 2025-08-29 19:44:33.677874 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 19:44:33.677881 | orchestrator | Friday 29 August 2025 19:43:16 +0000 (0:00:04.195) 0:01:34.771 ********* 2025-08-29 19:44:33.677887 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:33.677894 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:33.677899 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:33.677905 | orchestrator | 2025-08-29 19:44:33.677911 | orchestrator | 
TASK [glance : Creating Glance database] *************************************** 2025-08-29 19:44:33.677918 | orchestrator | Friday 29 August 2025 19:43:17 +0000 (0:00:00.316) 0:01:35.087 ********* 2025-08-29 19:44:33.677924 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.677930 | orchestrator | 2025-08-29 19:44:33.677936 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-08-29 19:44:33.677942 | orchestrator | Friday 29 August 2025 19:43:19 +0000 (0:00:02.305) 0:01:37.393 ********* 2025-08-29 19:44:33.677948 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.677954 | orchestrator | 2025-08-29 19:44:33.677960 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-08-29 19:44:33.677967 | orchestrator | Friday 29 August 2025 19:43:21 +0000 (0:00:02.349) 0:01:39.742 ********* 2025-08-29 19:44:33.677980 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.677985 | orchestrator | 2025-08-29 19:44:33.677991 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-08-29 19:44:33.677997 | orchestrator | Friday 29 August 2025 19:43:24 +0000 (0:00:02.288) 0:01:42.030 ********* 2025-08-29 19:44:33.678003 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.678009 | orchestrator | 2025-08-29 19:44:33.678068 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-08-29 19:44:33.678084 | orchestrator | Friday 29 August 2025 19:43:51 +0000 (0:00:27.920) 0:02:09.951 ********* 2025-08-29 19:44:33.678090 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.678097 | orchestrator | 2025-08-29 19:44:33.678103 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 19:44:33.678109 | orchestrator | Friday 29 August 2025 19:43:54 +0000 (0:00:02.356) 0:02:12.308 ********* 2025-08-29 
19:44:33.678114 | orchestrator | 2025-08-29 19:44:33.678121 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 19:44:33.678127 | orchestrator | Friday 29 August 2025 19:43:54 +0000 (0:00:00.066) 0:02:12.375 ********* 2025-08-29 19:44:33.678133 | orchestrator | 2025-08-29 19:44:33.678189 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 19:44:33.678197 | orchestrator | Friday 29 August 2025 19:43:54 +0000 (0:00:00.084) 0:02:12.459 ********* 2025-08-29 19:44:33.678202 | orchestrator | 2025-08-29 19:44:33.678208 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-08-29 19:44:33.678214 | orchestrator | Friday 29 August 2025 19:43:54 +0000 (0:00:00.066) 0:02:12.526 ********* 2025-08-29 19:44:33.678220 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:44:33.678225 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:44:33.678231 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:44:33.678237 | orchestrator | 2025-08-29 19:44:33.678243 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:44:33.678249 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 19:44:33.678257 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 19:44:33.678264 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 19:44:33.678270 | orchestrator | 2025-08-29 19:44:33.678275 | orchestrator | 2025-08-29 19:44:33.678281 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:44:33.678288 | orchestrator | Friday 29 August 2025 19:44:32 +0000 (0:00:37.940) 0:02:50.466 ********* 2025-08-29 19:44:33.678295 | 
orchestrator | =============================================================================== 2025-08-29 19:44:33.678301 | orchestrator | glance : Restart glance-api container ---------------------------------- 37.94s 2025-08-29 19:44:33.678308 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.92s 2025-08-29 19:44:33.678314 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.25s 2025-08-29 19:44:33.678320 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.63s 2025-08-29 19:44:33.678332 | orchestrator | glance : Copying over config.json files for services -------------------- 5.57s 2025-08-29 19:44:33.678339 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.53s 2025-08-29 19:44:33.678345 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.57s 2025-08-29 19:44:33.678352 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.37s 2025-08-29 19:44:33.678358 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.36s 2025-08-29 19:44:33.678364 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.29s 2025-08-29 19:44:33.678377 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.24s 2025-08-29 19:44:33.678384 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.21s 2025-08-29 19:44:33.678391 | orchestrator | glance : Check glance containers ---------------------------------------- 4.20s 2025-08-29 19:44:33.678398 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.92s 2025-08-29 19:44:33.678404 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.65s 2025-08-29 19:44:33.678411 | 
orchestrator | service-ks-register : glance | Creating services ------------------------ 3.52s 2025-08-29 19:44:33.678417 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.50s 2025-08-29 19:44:33.678425 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.27s 2025-08-29 19:44:33.678432 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.25s 2025-08-29 19:44:33.678439 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.07s 2025-08-29 19:44:33.678446 | orchestrator | 2025-08-29 19:44:33 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:33.678453 | orchestrator | 2025-08-29 19:44:33 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:33.680425 | orchestrator | 2025-08-29 19:44:33 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:33.680503 | orchestrator | 2025-08-29 19:44:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:36.731615 | orchestrator | 2025-08-29 19:44:36 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:36.733752 | orchestrator | 2025-08-29 19:44:36 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:36.735563 | orchestrator | 2025-08-29 19:44:36 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:36.737036 | orchestrator | 2025-08-29 19:44:36 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:36.737079 | orchestrator | 2025-08-29 19:44:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:39.778985 | orchestrator | 2025-08-29 19:44:39 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:39.781577 | orchestrator | 2025-08-29 19:44:39 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:39.782992 | orchestrator | 2025-08-29 19:44:39 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:39.784598 | orchestrator | 2025-08-29 19:44:39 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:39.784648 | orchestrator | 2025-08-29 19:44:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:42.824008 | orchestrator | 2025-08-29 19:44:42 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:42.827227 | orchestrator | 2025-08-29 19:44:42 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:42.830697 | orchestrator | 2025-08-29 19:44:42 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:42.833261 | orchestrator | 2025-08-29 19:44:42 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:42.833350 | orchestrator | 2025-08-29 19:44:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:45.863488 | orchestrator | 2025-08-29 19:44:45 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:45.863717 | orchestrator | 2025-08-29 19:44:45 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:45.864263 | orchestrator | 2025-08-29 19:44:45 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:45.865170 | orchestrator | 2025-08-29 19:44:45 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:45.865201 | orchestrator | 2025-08-29 19:44:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:48.908983 | orchestrator | 2025-08-29 19:44:48 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:48.909451 | orchestrator | 2025-08-29 19:44:48 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:48.912492 | orchestrator | 2025-08-29 19:44:48 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:48.914052 | orchestrator | 2025-08-29 19:44:48 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:48.914107 | orchestrator | 2025-08-29 19:44:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:51.953792 | orchestrator | 2025-08-29 19:44:51 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:51.956070 | orchestrator | 2025-08-29 19:44:51 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:51.958002 | orchestrator | 2025-08-29 19:44:51 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:51.959522 | orchestrator | 2025-08-29 19:44:51 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:51.959686 | orchestrator | 2025-08-29 19:44:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:55.001900 | orchestrator | 2025-08-29 19:44:54 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED 2025-08-29 19:44:55.012037 | orchestrator | 2025-08-29 19:44:55 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:44:55.012210 | orchestrator | 2025-08-29 19:44:55 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state STARTED 2025-08-29 19:44:55.012765 | orchestrator | 2025-08-29 19:44:55 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:44:55.012889 | orchestrator | 2025-08-29 19:44:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:44:58.052018 | orchestrator | 2025-08-29 19:44:58 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED 2025-08-29 19:44:58.052537 | orchestrator | 2025-08-29 19:44:58 | INFO  | Task 
cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED
2025-08-29 19:44:58.053474 | orchestrator | 2025-08-29 19:44:58 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:44:58.055550 | orchestrator | 2025-08-29 19:44:58 | INFO  | Task 7015eda8-112a-4832-98e4-988be6381900 is in state SUCCESS
2025-08-29 19:44:58.057014 | orchestrator |
2025-08-29 19:44:58.057044 | orchestrator |
2025-08-29 19:44:58.057049 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:44:58.057056 | orchestrator |
2025-08-29 19:44:58.057060 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:44:58.057064 | orchestrator | Friday 29 August 2025 19:41:43 +0000 (0:00:00.281) 0:00:00.281 *********
2025-08-29 19:44:58.057068 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:44:58.057074 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:44:58.057077 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:44:58.057081 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:44:58.057085 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:44:58.057089 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:44:58.057093 | orchestrator |
2025-08-29 19:44:58.057137 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:44:58.057142 | orchestrator | Friday 29 August 2025 19:41:44 +0000 (0:00:00.999) 0:00:01.280 *********
2025-08-29 19:44:58.057145 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-08-29 19:44:58.057150 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-08-29 19:44:58.057156 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-08-29 19:44:58.057162 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-08-29 19:44:58.057167 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-08-29 19:44:58.057173 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-08-29 19:44:58.057183 | orchestrator |
2025-08-29 19:44:58.057191 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-08-29 19:44:58.057196 | orchestrator |
2025-08-29 19:44:58.057203 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 19:44:58.057209 | orchestrator | Friday 29 August 2025 19:41:45 +0000 (0:00:00.628) 0:00:01.909 *********
2025-08-29 19:44:58.057215 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:44:58.057224 | orchestrator |
2025-08-29 19:44:58.057231 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-08-29 19:44:58.057237 | orchestrator | Friday 29 August 2025 19:41:46 +0000 (0:00:01.253) 0:00:03.162 *********
2025-08-29 19:44:58.057245 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-08-29 19:44:58.057250 | orchestrator |
2025-08-29 19:44:58.057257 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-08-29 19:44:58.057263 | orchestrator | Friday 29 August 2025 19:41:49 +0000 (0:00:03.495) 0:00:06.658 *********
2025-08-29 19:44:58.057270 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-08-29 19:44:58.057276 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-08-29 19:44:58.057283 | orchestrator |
2025-08-29 19:44:58.057289 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-08-29 19:44:58.057296 | orchestrator | Friday 29 August 2025 19:41:56 +0000 (0:00:06.566) 0:00:13.224 *********
2025-08-29 19:44:58.057301 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 19:44:58.057305 | orchestrator |
2025-08-29 19:44:58.057310 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-08-29 19:44:58.057314 | orchestrator | Friday 29 August 2025 19:41:59 +0000 (0:00:03.006) 0:00:16.230 *********
2025-08-29 19:44:58.057320 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 19:44:58.057327 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-08-29 19:44:58.057334 | orchestrator |
2025-08-29 19:44:58.057340 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-08-29 19:44:58.057346 | orchestrator | Friday 29 August 2025 19:42:03 +0000 (0:00:03.887) 0:00:20.118 *********
2025-08-29 19:44:58.057351 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 19:44:58.057356 | orchestrator |
2025-08-29 19:44:58.057363 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-08-29 19:44:58.057368 | orchestrator | Friday 29 August 2025 19:42:06 +0000 (0:00:03.505) 0:00:23.623 *********
2025-08-29 19:44:58.057375 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-08-29 19:44:58.057381 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-08-29 19:44:58.057387 | orchestrator |
2025-08-29 19:44:58.057393 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-08-29 19:44:58.057399 | orchestrator | Friday 29 August 2025 19:42:14 +0000 (0:00:08.049) 0:00:31.673 *********
2025-08-29 19:44:58.057409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes':
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.057439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.057444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.057449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2025-08-29 19:44:58.057468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
2025-08-29 19:44:58.057574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.057681 | orchestrator | 2025-08-29 19:44:58.057690 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 19:44:58.057724 | orchestrator | Friday 29 August 2025 19:42:17 +0000 (0:00:03.008) 0:00:34.681 ********* 2025-08-29 19:44:58.057729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:58.057735 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:58.057739 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:58.057744 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:44:58.057749 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:44:58.057998 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
19:44:58.058099 | orchestrator |
2025-08-29 19:44:58.058125 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 19:44:58.058132 | orchestrator | Friday 29 August 2025 19:42:18 +0000 (0:00:00.587) 0:00:35.268 *********
2025-08-29 19:44:58.058138 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:44:58.058142 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:44:58.058146 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:44:58.058150 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:44:58.058184 | orchestrator |
2025-08-29 19:44:58.058189 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-08-29 19:44:58.058193 | orchestrator | Friday 29 August 2025 19:42:19 +0000 (0:00:00.987) 0:00:36.256 *********
2025-08-29 19:44:58.058197 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-08-29 19:44:58.058202 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-08-29 19:44:58.058206 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-08-29 19:44:58.058209 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-08-29 19:44:58.058213 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-08-29 19:44:58.058217 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-08-29 19:44:58.058221 | orchestrator |
2025-08-29 19:44:58.058225 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-08-29 19:44:58.058229 | orchestrator | Friday 29 August 2025 19:42:21 +0000 (0:00:01.753) 0:00:38.009 *********
2025-08-29 19:44:58.058693 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 19:44:58.058716 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 19:44:58.058722 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 19:44:58.058736 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 19:44:58.058740 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 19:44:58.058745 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 19:44:58.058752 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 19:44:58.058757 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 19:44:58.058769 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 19:44:58.058778 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 19:44:58.058795 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 19:44:58.058807 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': 
True}])
2025-08-29 19:44:58.058817 | orchestrator |
2025-08-29 19:44:58.058824 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-08-29 19:44:58.058830 | orchestrator | Friday 29 August 2025 19:42:24 +0000 (0:00:03.472) 0:00:41.482 *********
2025-08-29 19:44:58.058837 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 19:44:58.058844 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 19:44:58.058851 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 19:44:58.058857 | orchestrator |
2025-08-29 19:44:58.058863 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-08-29 19:44:58.058870 | orchestrator | Friday 29 August 2025 19:42:26 +0000 (0:00:02.150) 0:00:43.632 *********
2025-08-29 19:44:58.058876 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-08-29 19:44:58.058882 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-08-29 19:44:58.058888 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-08-29 19:44:58.058892 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 19:44:58.058895 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 19:44:58.058903 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 19:44:58.058907 | orchestrator |
2025-08-29 19:44:58.058911 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-08-29 19:44:58.058915 | orchestrator | Friday 29 August 2025 19:42:30 +0000 (0:00:03.512) 0:00:47.145 *********
2025-08-29 19:44:58.058919 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-08-29 19:44:58.058923 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-08-29 19:44:58.058927 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-08-29 19:44:58.058930 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-08-29 19:44:58.058934 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-08-29 19:44:58.058938 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-08-29 19:44:58.058942 | orchestrator |
2025-08-29 19:44:58.058946 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-08-29 19:44:58.058949 | orchestrator | Friday 29 August 2025 19:42:31 +0000 (0:00:01.268) 0:00:48.413 *********
2025-08-29 19:44:58.058953 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:44:58.058957 | orchestrator |
2025-08-29 19:44:58.058961 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-08-29 19:44:58.058965 | orchestrator | Friday 29 August 2025 19:42:31 +0000 (0:00:00.231) 0:00:48.645 *********
2025-08-29 19:44:58.058973 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:44:58.058977 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:44:58.058980 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:44:58.058984 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:44:58.058988 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:44:58.058992 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:44:58.058995 | orchestrator |
2025-08-29 19:44:58.058999 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 19:44:58.059003 | orchestrator | Friday 29 August 2025 19:42:32 +0000 (0:00:00.770) 0:00:49.415 *********
2025-08-29 19:44:58.059008 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3,
testbed-node-4, testbed-node-5 2025-08-29 19:44:58.059013 | orchestrator | 2025-08-29 19:44:58.059016 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-08-29 19:44:58.059020 | orchestrator | Friday 29 August 2025 19:42:33 +0000 (0:00:00.906) 0:00:50.322 ********* 2025-08-29 19:44:58.059027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059052 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059150 | orchestrator | 2025-08-29 19:44:58.059154 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS 
certificate] *** 2025-08-29 19:44:58.059158 | orchestrator | Friday 29 August 2025 19:42:36 +0000 (0:00:03.095) 0:00:53.418 ********* 2025-08-29 19:44:58.059162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059177 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:58.059181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059196 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:58.059210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059223 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:58.059229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059251 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:44:58.059258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059277 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:44:58.059282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-08-29 19:44:58.059294 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:44:58.059298 | orchestrator |
2025-08-29 19:44:58.059302 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-08-29 19:44:58.059306 | orchestrator | Friday 29 August 2025 19:42:38 +0000 (0:00:01.648) 0:00:55.066 *********
2025-08-29 19:44:58.059314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 19:44:58.059318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 19:44:58.059325 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059334 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:58.059339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059354 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:58.059359 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:58.059363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059372 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:44:58.059379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059400 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:44:58.059404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 19:44:58.059409 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:44:58.059413 | orchestrator |
2025-08-29 19:44:58.059417 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-08-29 19:44:58.059422 | orchestrator | Friday 29 August 2025 19:42:39 +0000 (0:00:01.502) 0:00:56.568 *********
2025-08-29 19:44:58.059429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 19:44:58.059434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059454 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059525 | orchestrator | 2025-08-29 19:44:58.059536 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 19:44:58.059543 | orchestrator | Friday 29 August 2025 19:42:42 +0000 (0:00:02.940) 0:00:59.509 ********* 2025-08-29 19:44:58.059549 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 19:44:58.059555 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:44:58.059559 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 19:44:58.059568 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:44:58.059572 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 19:44:58.059575 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:44:58.059579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 19:44:58.059583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 19:44:58.059587 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 19:44:58.059590 | orchestrator | 2025-08-29 19:44:58.059594 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 19:44:58.059598 | orchestrator | Friday 29 August 2025 19:42:44 +0000 (0:00:02.214) 0:01:01.724 ********* 2025-08-29 19:44:58.059602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.059623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.059700 | orchestrator | 2025-08-29 19:44:58.059705 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 19:44:58.059711 | orchestrator | Friday 29 August 2025 19:42:54 +0000 (0:00:10.126) 0:01:11.851 ********* 2025-08-29 19:44:58.059720 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:58.059726 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:58.059732 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:58.059738 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:44:58.059744 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:44:58.059750 | orchestrator | changed: [testbed-node-5] 2025-08-29 
19:44:58.059756 | orchestrator | 2025-08-29 19:44:58.059763 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 19:44:58.059769 | orchestrator | Friday 29 August 2025 19:42:57 +0000 (0:00:02.688) 0:01:14.539 ********* 2025-08-29 19:44:58.059774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059809 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:58.059820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 19:44:58.059825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059829 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:58.059832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:58.059836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2025-08-29 19:44:58.059847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059851 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:44:58.059855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059863 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:44:58.059911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 19:44:58.059936 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:44:58.059941 | orchestrator | 2025-08-29 19:44:58.059947 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 19:44:58.059953 | orchestrator | Friday 29 August 2025 19:42:59 +0000 (0:00:01.758) 0:01:16.298 ********* 2025-08-29 19:44:58.059958 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:44:58.059964 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:44:58.059970 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:44:58.059976 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:44:58.059982 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:44:58.059988 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:44:58.059994 | orchestrator | 2025-08-29 19:44:58.060001 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-08-29 19:44:58.060011 | orchestrator | Friday 29 August 2025 19:43:00 +0000 (0:00:00.947) 0:01:17.245 ********* 2025-08-29 19:44:58.060017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.060024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.060036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060046 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 19:44:58.060053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 
19:44:58.060081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 19:44:58.060096 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 19:44:58.060100 | orchestrator |
2025-08-29 19:44:58.060140 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 19:44:58.060144 | orchestrator | Friday 29 August 2025 19:43:03 +0000 (0:00:03.090) 0:01:20.336 *********
2025-08-29 19:44:58.060148 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:44:58.060152 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:44:58.060156 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:44:58.060160 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:44:58.060163 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:44:58.060167 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:44:58.060171 | orchestrator |
2025-08-29 19:44:58.060175 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-08-29 19:44:58.060179 | orchestrator | Friday 29 August 2025 19:43:04 +0000 (0:00:00.911) 0:01:21.248 *********
2025-08-29 19:44:58.060182 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:44:58.060186 | orchestrator |
2025-08-29 19:44:58.060190 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-08-29 19:44:58.060194 | orchestrator | Friday 29 August 2025 19:43:06 +0000 (0:00:02.338) 0:01:23.586 *********
2025-08-29 19:44:58.060197 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:44:58.060201 | orchestrator |
2025-08-29 19:44:58.060205 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-08-29 19:44:58.060212 | orchestrator | Friday 29 August 2025 19:43:08 +0000 (0:00:02.221) 0:01:25.808 *********
2025-08-29 19:44:58.060216 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:44:58.060220 | orchestrator |
2025-08-29 19:44:58.060224 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 19:44:58.060227 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:21.398) 0:01:47.206 *********
2025-08-29 19:44:58.060231 | orchestrator |
2025-08-29 19:44:58.060237 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 19:44:58.060241 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:00.061) 0:01:47.268 *********
2025-08-29 19:44:58.060245 | orchestrator |
2025-08-29 19:44:58.060249 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 19:44:58.060253 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:00.060) 0:01:47.328 *********
2025-08-29 19:44:58.060256 | orchestrator |
2025-08-29 19:44:58.060260 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 19:44:58.060264 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:00.067) 0:01:47.396 *********
2025-08-29 19:44:58.060268 | orchestrator |
2025-08-29 19:44:58.060271 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 19:44:58.060275 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:00.089) 0:01:47.485 *********
2025-08-29 19:44:58.060279 | orchestrator |
2025-08-29 19:44:58.060282 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 19:44:58.060286 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:00.062) 0:01:47.548 *********
2025-08-29 19:44:58.060290 | orchestrator |
2025-08-29 19:44:58.060294 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-08-29 19:44:58.060297 | orchestrator | Friday 29 August 2025 19:43:30 +0000 (0:00:00.061) 0:01:47.609 *********
2025-08-29 19:44:58.060301 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:44:58.060305 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:44:58.060308 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:44:58.060312 | orchestrator |
2025-08-29 19:44:58.060316 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-08-29 19:44:58.060319 | orchestrator | Friday 29 August 2025 19:43:58 +0000 (0:00:28.246) 0:02:15.855 *********
2025-08-29 19:44:58.060323 | orchestrator | changed: [testbed-node-0]
2025-08-29 19:44:58.060327 | orchestrator | changed: [testbed-node-2]
2025-08-29 19:44:58.060331 | orchestrator | changed: [testbed-node-1]
2025-08-29 19:44:58.060334 | orchestrator |
2025-08-29 19:44:58.060338 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-08-29 19:44:58.060342 | orchestrator | Friday 29 August 2025 19:44:06 +0000 (0:00:07.877) 0:02:23.732 *********
2025-08-29 19:44:58.060345 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:44:58.060349 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:44:58.060353 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:44:58.060357 | orchestrator |
2025-08-29 19:44:58.060360 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-08-29 19:44:58.060364 | orchestrator | Friday 29 August 2025 19:44:45 +0000 (0:00:38.596) 0:03:02.329 *********
2025-08-29 19:44:58.060368 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:44:58.060371 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:44:58.060375 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:44:58.060379 | orchestrator |
2025-08-29 19:44:58.060383 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-08-29 19:44:58.060390 | orchestrator | Friday 29 August 2025 19:44:55 +0000 (0:00:10.312) 0:03:12.641 *********
2025-08-29 19:44:58.060393 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:44:58.060397 | orchestrator |
2025-08-29 19:44:58.060401 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 19:44:58.060405 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 19:44:58.060412 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 19:44:58.060416 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 19:44:58.060420 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 19:44:58.060424 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 19:44:58.060427 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 19:44:58.060431 | orchestrator |
2025-08-29 19:44:58.060435 | orchestrator |
2025-08-29 19:44:58.060439 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 19:44:58.060442 | orchestrator | Friday 29 August 2025 19:44:56 +0000 (0:00:00.554) 0:03:13.195 *********
2025-08-29 19:44:58.060446 | orchestrator | ===============================================================================
2025-08-29 19:44:58.060450 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 38.60s
2025-08-29 19:44:58.060454 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.25s
2025-08-29 19:44:58.060457 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.40s
2025-08-29 19:44:58.060461 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.31s
2025-08-29 19:44:58.060465 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.13s
2025-08-29 19:44:58.060468 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.05s
2025-08-29 19:44:58.060472 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.88s
2025-08-29 19:44:58.060476 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.57s
2025-08-29 19:44:58.060481 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.89s
2025-08-29 19:44:58.060485 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.51s
2025-08-29 19:44:58.060489 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.51s
2025-08-29 19:44:58.060493 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.50s
2025-08-29 19:44:58.060496 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.47s
2025-08-29 19:44:58.060500 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.10s
2025-08-29 19:44:58.060504 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.09s
2025-08-29 19:44:58.060508 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.01s
2025-08-29 19:44:58.060511 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.01s
2025-08-29 19:44:58.060515 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.94s
2025-08-29 19:44:58.060519 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.69s
2025-08-29 19:44:58.060522 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.34s
2025-08-29 19:44:58.060526 | orchestrator | 2025-08-29 19:44:58 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED
2025-08-29 19:44:58.060530 | orchestrator | 2025-08-29 19:44:58 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:45:01.098553 | orchestrator | 2025-08-29 19:45:01 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED
2025-08-29 19:45:01.101856 | orchestrator | 2025-08-29 19:45:01 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED
2025-08-29 19:45:01.105310 | orchestrator | 2025-08-29 19:45:01 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:45:01.107957 | orchestrator | 2025-08-29 19:45:01 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED
2025-08-29 19:45:01.108798 | orchestrator | 2025-08-29 19:45:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:45:04.148796 | orchestrator | 2025-08-29 19:45:04 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED
2025-08-29 19:45:04.150928 | orchestrator | 2025-08-29 19:45:04 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED
2025-08-29 19:45:04.152624 | orchestrator | 2025-08-29 19:45:04 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:45:04.154241 | orchestrator | 2025-08-29 19:45:04 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED
2025-08-29 19:45:04.154292 | orchestrator | 2025-08-29 19:45:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:45:07.195635 | orchestrator | 2025-08-29 19:45:07 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED
2025-08-29 19:45:07.195736 | orchestrator | 2025-08-29 19:45:07 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED
2025-08-29 19:45:07.196844 | orchestrator | 2025-08-29 19:45:07 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:45:07.197410 | orchestrator | 2025-08-29 19:45:07 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED
2025-08-29 19:45:07.197438 | orchestrator | 2025-08-29 19:45:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:45:10.245074 | orchestrator | 2025-08-29 19:45:10 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED
2025-08-29 19:45:10.245863 | orchestrator | 2025-08-29 19:45:10 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state STARTED
2025-08-29 19:45:10.248829 | orchestrator | 2025-08-29 19:45:10 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:45:10.251484 | orchestrator | 2025-08-29 19:45:10 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED
2025-08-29 19:45:10.251661 | orchestrator | 2025-08-29 19:45:10 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:45:13.293494 | orchestrator | 2025-08-29 19:45:13 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED
2025-08-29 19:45:13.298142 | orchestrator | 2025-08-29 19:45:13 | INFO  | Task cb3c7d90-1e56-436c-b5f4-e5c8ba2e0c60 is in state SUCCESS
2025-08-29 19:45:13.300629 | orchestrator |
2025-08-29 19:45:13.300670 | orchestrator |
2025-08-29 19:45:13.300680 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:45:13.300688 | orchestrator |
2025-08-29 19:45:13.300697 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:45:13.300705 | orchestrator | Friday 29 August 2025 19:42:50 +0000 (0:00:00.260) 0:00:00.260 *********
2025-08-29 19:45:13.300713 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:45:13.300723 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:45:13.300731 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:45:13.300738 | orchestrator |
2025-08-29 19:45:13.300746 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:45:13.300754 | orchestrator | Friday 29 August 2025 19:42:50 +0000 (0:00:00.437) 0:00:00.698 *********
2025-08-29 19:45:13.300762 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-08-29 19:45:13.300770 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-08-29 19:45:13.300778 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-08-29 19:45:13.300786 | orchestrator |
2025-08-29 19:45:13.300818 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-08-29 19:45:13.300826 | orchestrator |
2025-08-29 19:45:13.300834 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-08-29 19:45:13.300841 | orchestrator | Friday 29 August 2025 19:42:51 +0000 (0:00:00.752) 0:00:01.450 *********
2025-08-29 19:45:13.300850 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:45:13.300858 | orchestrator |
2025-08-29 19:45:13.300867 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-08-29 19:45:13.300874 | orchestrator | Friday 29 August 2025 19:42:52 +0000 (0:00:01.032) 0:00:02.483 *********
2025-08-29 19:45:13.300887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2',
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.300913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.300921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.300929 | orchestrator | 2025-08-29 19:45:13.300937 | orchestrator | TASK [grafana : Check if extra 
configuration file exists] ********************** 2025-08-29 19:45:13.301019 | orchestrator | Friday 29 August 2025 19:42:53 +0000 (0:00:00.986) 0:00:03.469 ********* 2025-08-29 19:45:13.301029 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-08-29 19:45:13.301038 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-08-29 19:45:13.301066 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:45:13.301075 | orchestrator | 2025-08-29 19:45:13.301346 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 19:45:13.301357 | orchestrator | Friday 29 August 2025 19:42:54 +0000 (0:00:00.781) 0:00:04.251 ********* 2025-08-29 19:45:13.301363 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:45:13.301368 | orchestrator | 2025-08-29 19:45:13.301373 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-08-29 19:45:13.301413 | orchestrator | Friday 29 August 2025 19:42:55 +0000 (0:00:00.727) 0:00:04.979 ********* 2025-08-29 19:45:13.301435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301476 | orchestrator | 2025-08-29 19:45:13.301484 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-08-29 19:45:13.301492 | orchestrator | Friday 29 August 2025 19:42:56 +0000 (0:00:01.739) 0:00:06.718 ********* 2025-08-29 19:45:13.301704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 19:45:13.301715 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:45:13.301724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 19:45:13.301732 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:45:13.301766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 19:45:13.301784 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:45:13.301792 | orchestrator | 2025-08-29 19:45:13.301799 | orchestrator | 
TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-08-29 19:45:13.301807 | orchestrator | Friday 29 August 2025 19:42:57 +0000 (0:00:00.478) 0:00:07.196 ********* 2025-08-29 19:45:13.301815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 19:45:13.301824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 19:45:13.301832 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:45:13.301840 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:45:13.301848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 19:45:13.301856 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:45:13.301863 | orchestrator | 2025-08-29 19:45:13.301875 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-08-29 19:45:13.301883 | orchestrator | Friday 29 August 2025 19:42:58 +0000 (0:00:01.026) 0:00:08.222 ********* 2025-08-29 19:45:13.301891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301943 | orchestrator | 2025-08-29 19:45:13.301950 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-08-29 19:45:13.301958 | orchestrator | Friday 29 August 2025 19:42:59 +0000 (0:00:01.587) 0:00:09.809 ********* 2025-08-29 19:45:13.301966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.301974 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:45:13.301986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 19:45:13.301993 | orchestrator |
2025-08-29 19:45:13.302002 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-08-29 19:45:13.302009 | orchestrator | Friday 29 August 2025 19:43:01 +0000 (0:00:00.704) 0:00:11.526 *********
2025-08-29 19:45:13.302054 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:45:13.302062 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:45:13.302169 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:45:13.302179 | orchestrator |
2025-08-29 19:45:13.302194 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-08-29 19:45:13.302202 | orchestrator | Friday 29 August 2025 19:43:02 +0000 (0:00:00.704) 0:00:12.230 *********
2025-08-29 19:45:13.302210 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 19:45:13.302218 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 19:45:13.302226 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 19:45:13.302234 | orchestrator |
2025-08-29 19:45:13.302241 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-08-29 19:45:13.302249 | orchestrator | Friday 29 August 2025 19:43:03 +0000 (0:00:01.288) 0:00:13.518 *********
2025-08-29 19:45:13.302257 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 19:45:13.302265 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 19:45:13.302273 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 19:45:13.302281 | orchestrator |
2025-08-29 19:45:13.302288 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-08-29 19:45:13.302296 | orchestrator | Friday 29 August 2025 19:43:05 +0000 (0:00:01.429) 0:00:14.948 *********
2025-08-29 19:45:13.302326 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 19:45:13.302334 | orchestrator |
2025-08-29 19:45:13.302342 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-08-29 19:45:13.302350 | orchestrator | Friday 29 August 2025 19:43:05 +0000 (0:00:00.699) 0:00:15.647 *********
2025-08-29 19:45:13.302358 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-08-29 19:45:13.302365 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-08-29 19:45:13.302373 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:45:13.302381 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:45:13.302389 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:45:13.302397 | orchestrator |
2025-08-29 19:45:13.302405 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-08-29 19:45:13.302412 | orchestrator | Friday 29 August 2025 19:43:06 +0000 (0:00:00.461) 0:00:16.348 *********
2025-08-29 19:45:13.302421 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:45:13.302428 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:45:13.302436 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:45:13.302444 | orchestrator |
2025-08-29 19:45:13.302452 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-08-29 19:45:13.302459 | orchestrator | Friday 29 August 2025 19:43:06 +0000 (0:00:00.461) 0:00:16.810 *********
2025-08-29 19:45:13.302468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327002, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6236522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 19:45:13.302477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327002, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6236522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327002, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6236522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1327098, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6424048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302531 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1327098, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6424048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1327098, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6424048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327037, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6283948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302557 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327037, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6283948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327037, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6283948, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1327103, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6466365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302593 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1327103, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6466365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1327103, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6466365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327060, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6320944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-08-29 19:45:13.302640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327060, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6320944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327060, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6320944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327075, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6368492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327075, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6368492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1327075, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6368492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327001, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.621504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327001, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.621504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327001, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.621504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327016, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6269164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327016, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6269164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327016, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6269164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327041, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6290543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327041, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6290543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327041, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6290543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327068, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6349297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327068, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6349297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327068, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6349297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1327081, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.639686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1327081, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.639686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1327081, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.639686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327029, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.627502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327029, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.627502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327029, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.627502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327073, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6356084, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327073, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6356084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1327073, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6356084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.302968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327061, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6326084, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
(legend: every item below shares the same stat 'value' — regular file at /operations/grafana/dashboards/<key>, 'mode': '0644', uid 0, gid 0 (root:root, rw-r--r--), 'dev': 105, 'nlink': 1, 'atime'/'mtime': 1756453149.0; per-file 'size', 'inode' and 'ctime' are listed once after the events)
2025-08-29 19:45:13.302980 | orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json)
2025-08-29 19:45:13.302988 | orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json)
2025-08-29 19:45:13.302996 | orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json)
2025-08-29 19:45:13.303013 | orchestrator | changed: [testbed-node-0] => (item=ceph/multi-cluster-overview.json)
2025-08-29 19:45:13.303021 | orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json)
2025-08-29 19:45:13.303030 | orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json)
2025-08-29 19:45:13.303036 | orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json)
2025-08-29 19:45:13.303044 | orchestrator | changed: [testbed-node-2] => (item=ceph/pool-overview.json)
2025-08-29 19:45:13.303049 | orchestrator | changed: [testbed-node-0] => (item=ceph/pool-overview.json)
2025-08-29 19:45:13.303058 | orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json)
2025-08-29 19:45:13.303064 | orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json)
2025-08-29 19:45:13.303072 | orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json)
2025-08-29 19:45:13.303077 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-overview.json)
2025-08-29 19:45:13.303136 | orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json)
2025-08-29 19:45:13.303146 | orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json)
2025-08-29 19:45:13.303156 | orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json)
2025-08-29 19:45:13.303162 | orchestrator | changed: [testbed-node-2] => (item=openstack/openstack.json)
2025-08-29 19:45:13.303175 | orchestrator | changed: [testbed-node-0] => (item=openstack/openstack.json)
2025-08-29 19:45:13.303183 | orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json)
2025-08-29 19:45:13.303197 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/haproxy.json)
2025-08-29 19:45:13.303205 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/haproxy.json)
2025-08-29 19:45:13.303212 | orchestrator | changed: [testbed-node-1] => (item=openstack/openstack.json)
2025-08-29 19:45:13.303225 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json)
2025-08-29 19:45:13.303238 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json)
2025-08-29 19:45:13.303246 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json)
2025-08-29 19:45:13.303254 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json)
2025-08-29 19:45:13.303265 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json)
2025-08-29 19:45:13.303273 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json)
2025-08-29 19:45:13.303286 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json)
2025-08-29 19:45:13.303298 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json)
2025-08-29 19:45:13.303306 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json)
2025-08-29 19:45:13.303314 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json)
2025-08-29 19:45:13.303325 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json)
2025-08-29 19:45:13.303333 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json)
2025-08-29 19:45:13.303346 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json)
2025-08-29 19:45:13.303358 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_full.json)
2025-08-29 19:45:13.303366 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json)
2025-08-29 19:45:13.303373 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json)
2025-08-29 19:45:13.303385 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json)
2025-08-29 19:45:13.303393 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json)
2025-08-29 19:45:13.303404 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json)
2025-08-29 19:45:13.303417 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json)
2025-08-29 19:45:13.303425 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json)
2025-08-29 19:45:13.303433 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json)
2025-08-29 19:45:13.303444 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json)
2025-08-29 19:45:13.303452 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json)
2025-08-29 19:45:13.303459 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json)
2025-08-29 19:45:13.303475 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json)
2025-08-29 19:45:13.303483 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json)
2025-08-29 19:45:13.303491 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json)
2025-08-29 19:45:13.303501 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json)
2025-08-29 19:45:13.303509 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json)
2025-08-29 19:45:13.303516 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json)
2025-08-29 19:45:13.303532 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json)
2025-08-29 19:45:13.303540 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json)
2025-08-29 19:45:13.303548 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json)
2025-08-29 19:45:13.303556 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json)
2025-08-29 19:45:13.303567 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json)
2025-08-29 19:45:13.303575 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json)
2025-08-29 19:45:13.303593 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json)
(per-file stat details: size / inode / ctime)
  ceph/osds-overview.json: 38432 / 1327061 / 1756493694.6326084
  ceph/multi-cluster-overview.json: 62676 / 1327052 / 1756493694.6318357
  ceph/hosts-overview.json: 27218 / 1327049 / 1756493694.6304066
  ceph/pool-overview.json: 49139 / 1327070 / 1756493694.6355898
  ceph/host-details.json: 44791 / 1327046 / 1756493694.6297998
  ceph/radosgw-sync-overview.json: 16156 / 1327078 / 1756493694.6375296
  openstack/openstack.json: 57270 / 1327264 / 1756493694.7412503
  infrastructure/haproxy.json: 410814 / 1327179 / 1756493694.683609
  infrastructure/database.json: 30898 / 1327159 / 1756493694.6700325
  infrastructure/node-rsrc-use.json: 15725 / 1327212 / 1756493694.7120748
  infrastructure/alertmanager-overview.json: 9645 / 1327114 / 1756493694.6474757
  infrastructure/opensearch.json: 65458 / 1327239 / 1756493694.7286677
  infrastructure/node_exporter_full.json: 682774 / 1327217 / 1756493694.7238178
  infrastructure/prometheus-remote-write.json: 22317 / 1327245 / 1756493694.7293336
  infrastructure/redfish.json: 38087 / 1327262 / 1756493694.7386096
  infrastructure/nodes.json: 21109 / 1327236 / 1756493694.7260354
  infrastructure/memcached.json: 24243 / 1327181 / 1756493694.686248
  infrastructure/fluentd.json: 82960 / 1327176 / 1756493694.678609
  infrastructure/libvirt.json: 29672 / 1327180 / 1756493694.684609
  infrastructure/elasticsearch.json: 187864 / 1327163 / 1756493694.6756089
  infrastructure/node-cluster-rsrc-use.json: 16098 / 1327210 / 1756493694.7106092
2025-08-29 19:45:13.303601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327253, 'dev': 105, 'nlink': 1, 'atime':
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7379577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1327163, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6756089, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327253, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7379577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327250, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7316096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1327210, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7106092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327250, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7316096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327117, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.649354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1327253, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7379577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327117, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.649354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327125, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6692412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1327250, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7316096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327125, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6692412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303718 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327233, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7251732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1327117, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.649354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327233, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7251732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327248, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7299833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1327125, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.6692412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327248, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7299833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1327233, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7251732, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1327248, 'dev': 105, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756493694.7299833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 19:45:13.303792 | orchestrator | 2025-08-29 19:45:13.303800 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-08-29 19:45:13.303807 | orchestrator | Friday 29 August 2025 19:43:45 +0000 (0:00:38.816) 0:00:55.626 ********* 2025-08-29 19:45:13.303815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.303827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.303840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 19:45:13.303848 | orchestrator | 2025-08-29 
19:45:13.303855 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-08-29 19:45:13.303862 | orchestrator | Friday 29 August 2025 19:43:46 +0000 (0:00:01.160) 0:00:56.787 ********* 2025-08-29 19:45:13.303870 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:45:13.303878 | orchestrator | 2025-08-29 19:45:13.303886 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-08-29 19:45:13.303893 | orchestrator | Friday 29 August 2025 19:43:49 +0000 (0:00:02.368) 0:00:59.155 ********* 2025-08-29 19:45:13.303900 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:45:13.303907 | orchestrator | 2025-08-29 19:45:13.303915 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 19:45:13.303922 | orchestrator | Friday 29 August 2025 19:43:51 +0000 (0:00:02.439) 0:01:01.594 ********* 2025-08-29 19:45:13.303929 | orchestrator | 2025-08-29 19:45:13.303936 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 19:45:13.303947 | orchestrator | Friday 29 August 2025 19:43:51 +0000 (0:00:00.064) 0:01:01.659 ********* 2025-08-29 19:45:13.303954 | orchestrator | 2025-08-29 19:45:13.303961 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 19:45:13.303969 | orchestrator | Friday 29 August 2025 19:43:51 +0000 (0:00:00.065) 0:01:01.725 ********* 2025-08-29 19:45:13.303976 | orchestrator | 2025-08-29 19:45:13.303983 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-08-29 19:45:13.303990 | orchestrator | Friday 29 August 2025 19:43:52 +0000 (0:00:00.235) 0:01:01.961 ********* 2025-08-29 19:45:13.303997 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:45:13.304004 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:45:13.304012 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 19:45:13.304019 | orchestrator | 2025-08-29 19:45:13.304026 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-08-29 19:45:13.304033 | orchestrator | Friday 29 August 2025 19:43:53 +0000 (0:00:01.832) 0:01:03.794 ********* 2025-08-29 19:45:13.304041 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:45:13.304048 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:45:13.304055 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-08-29 19:45:13.304063 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-08-29 19:45:13.304070 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-08-29 19:45:13.304077 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:45:13.304103 | orchestrator | 2025-08-29 19:45:13.304111 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-08-29 19:45:13.304118 | orchestrator | Friday 29 August 2025 19:44:32 +0000 (0:00:38.800) 0:01:42.595 ********* 2025-08-29 19:45:13.304125 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:45:13.304132 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:45:13.304139 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:45:13.304151 | orchestrator | 2025-08-29 19:45:13.304159 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-08-29 19:45:13.304166 | orchestrator | Friday 29 August 2025 19:45:04 +0000 (0:00:32.146) 0:02:14.741 ********* 2025-08-29 19:45:13.304173 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:45:13.304181 | orchestrator | 2025-08-29 19:45:13.304188 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-08-29 19:45:13.304195 | orchestrator | 
Friday 29 August 2025 19:45:07 +0000 (0:00:02.151) 0:02:16.892 ********* 2025-08-29 19:45:13.304203 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:45:13.304210 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:45:13.304217 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:45:13.304224 | orchestrator | 2025-08-29 19:45:13.304232 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-08-29 19:45:13.304239 | orchestrator | Friday 29 August 2025 19:45:07 +0000 (0:00:00.392) 0:02:17.285 ********* 2025-08-29 19:45:13.304247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-08-29 19:45:13.304260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-08-29 19:45:13.304268 | orchestrator | 2025-08-29 19:45:13.304276 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-08-29 19:45:13.304283 | orchestrator | Friday 29 August 2025 19:45:09 +0000 (0:00:02.370) 0:02:19.656 ********* 2025-08-29 19:45:13.304290 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:45:13.304298 | orchestrator | 2025-08-29 19:45:13.304305 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:45:13.304313 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 19:45:13.304321 | orchestrator | 
testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 19:45:13.304329 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 19:45:13.304336 | orchestrator | 2025-08-29 19:45:13.304343 | orchestrator | 2025-08-29 19:45:13.304351 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:45:13.304358 | orchestrator | Friday 29 August 2025 19:45:10 +0000 (0:00:00.254) 0:02:19.911 ********* 2025-08-29 19:45:13.304365 | orchestrator | =============================================================================== 2025-08-29 19:45:13.304372 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.82s 2025-08-29 19:45:13.304379 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.80s 2025-08-29 19:45:13.304386 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.15s 2025-08-29 19:45:13.304394 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.44s 2025-08-29 19:45:13.304401 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.37s 2025-08-29 19:45:13.304411 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s 2025-08-29 19:45:13.304419 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.15s 2025-08-29 19:45:13.304426 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.83s 2025-08-29 19:45:13.304433 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.74s 2025-08-29 19:45:13.304445 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.72s 2025-08-29 19:45:13.304453 | orchestrator | grafana : Copying over config.json 
files -------------------------------- 1.59s 2025-08-29 19:45:13.304460 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.43s 2025-08-29 19:45:13.304467 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s 2025-08-29 19:45:13.304474 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.16s 2025-08-29 19:45:13.304481 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.03s 2025-08-29 19:45:13.304489 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.03s 2025-08-29 19:45:13.304496 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.99s 2025-08-29 19:45:13.304504 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.78s 2025-08-29 19:45:13.304511 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2025-08-29 19:45:13.304518 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s 2025-08-29 19:45:13.304526 | orchestrator | 2025-08-29 19:45:13 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:45:13.304534 | orchestrator | 2025-08-29 19:45:13 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:45:13.304541 | orchestrator | 2025-08-29 19:45:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:45:16.352886 | orchestrator | 2025-08-29 19:45:16 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED 2025-08-29 19:45:16.354811 | orchestrator | 2025-08-29 19:45:16 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:45:16.356278 | orchestrator | 2025-08-29 19:45:16 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:45:16.356323 | orchestrator | 
2025-08-29 19:45:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:45:52.926475 | orchestrator | 2025-08-29 19:45:52 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state STARTED 2025-08-29 19:45:52.928871 | orchestrator | 
2025-08-29 19:45:52 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:45:52.931434 | orchestrator | 2025-08-29 19:45:52 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:45:52.931586 | orchestrator | 2025-08-29 19:45:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:45:55.977086 | orchestrator | 2025-08-29 19:45:55 | INFO  | Task d61875ec-353a-4544-8f79-f332627f9f09 is in state SUCCESS 2025-08-29 19:45:55.978163 | orchestrator | 2025-08-29 19:45:55 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:45:55.979966 | orchestrator | 2025-08-29 19:45:55 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:45:55.980007 | orchestrator | 2025-08-29 19:45:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:45:59.026313 | orchestrator | 2025-08-29 19:45:59 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:45:59.028868 | orchestrator | 2025-08-29 19:45:59 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:45:59.028930 | orchestrator | 2025-08-29 19:45:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:02.077920 | orchestrator | 2025-08-29 19:46:02 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:02.079576 | orchestrator | 2025-08-29 19:46:02 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:02.079599 | orchestrator | 2025-08-29 19:46:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:05.122305 | orchestrator | 2025-08-29 19:46:05 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:05.123358 | orchestrator | 2025-08-29 19:46:05 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:05.123437 | orchestrator | 2025-08-29 19:46:05 | INFO  | Wait 1 second(s) until 
the next check 2025-08-29 19:46:08.156714 | orchestrator | 2025-08-29 19:46:08 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:08.156890 | orchestrator | 2025-08-29 19:46:08 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:08.156905 | orchestrator | 2025-08-29 19:46:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:11.194315 | orchestrator | 2025-08-29 19:46:11 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:11.196232 | orchestrator | 2025-08-29 19:46:11 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:11.196294 | orchestrator | 2025-08-29 19:46:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:14.233139 | orchestrator | 2025-08-29 19:46:14 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:14.233881 | orchestrator | 2025-08-29 19:46:14 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:14.233933 | orchestrator | 2025-08-29 19:46:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:17.277860 | orchestrator | 2025-08-29 19:46:17 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:17.280712 | orchestrator | 2025-08-29 19:46:17 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:17.280776 | orchestrator | 2025-08-29 19:46:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:20.330944 | orchestrator | 2025-08-29 19:46:20 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:20.332245 | orchestrator | 2025-08-29 19:46:20 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:20.332275 | orchestrator | 2025-08-29 19:46:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:23.380533 | orchestrator | 2025-08-29 19:46:23 
| INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:23.383183 | orchestrator | 2025-08-29 19:46:23 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:23.383230 | orchestrator | 2025-08-29 19:46:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:26.429912 | orchestrator | 2025-08-29 19:46:26 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:26.430635 | orchestrator | 2025-08-29 19:46:26 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:26.430669 | orchestrator | 2025-08-29 19:46:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:29.477062 | orchestrator | 2025-08-29 19:46:29 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:29.478358 | orchestrator | 2025-08-29 19:46:29 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:29.478411 | orchestrator | 2025-08-29 19:46:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:32.525242 | orchestrator | 2025-08-29 19:46:32 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:32.527028 | orchestrator | 2025-08-29 19:46:32 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:32.527084 | orchestrator | 2025-08-29 19:46:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:35.577777 | orchestrator | 2025-08-29 19:46:35 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:35.580683 | orchestrator | 2025-08-29 19:46:35 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:35.580754 | orchestrator | 2025-08-29 19:46:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:38.621725 | orchestrator | 2025-08-29 19:46:38 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 
2025-08-29 19:46:38.623682 | orchestrator | 2025-08-29 19:46:38 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:38.623705 | orchestrator | 2025-08-29 19:46:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:41.663008 | orchestrator | 2025-08-29 19:46:41 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:41.665179 | orchestrator | 2025-08-29 19:46:41 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:41.665229 | orchestrator | 2025-08-29 19:46:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:44.714121 | orchestrator | 2025-08-29 19:46:44 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:44.715465 | orchestrator | 2025-08-29 19:46:44 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state STARTED 2025-08-29 19:46:44.715501 | orchestrator | 2025-08-29 19:46:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:47.749069 | orchestrator | 2025-08-29 19:46:47 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:47.749591 | orchestrator | 2025-08-29 19:46:47 | INFO  | Task 28c90b05-02e1-48d4-9346-6684c493e7ef is in state SUCCESS 2025-08-29 19:46:47.749728 | orchestrator | 2025-08-29 19:46:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:50.797150 | orchestrator | 2025-08-29 19:46:50 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:50.797228 | orchestrator | 2025-08-29 19:46:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:53.837894 | orchestrator | 2025-08-29 19:46:53 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:53.838053 | orchestrator | 2025-08-29 19:46:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:56.878985 | orchestrator | 2025-08-29 19:46:56 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:56.879067 | orchestrator | 2025-08-29 19:46:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:46:59.930563 | orchestrator | 2025-08-29 19:46:59 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:46:59.931177 | orchestrator | 2025-08-29 19:46:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:02.962735 | orchestrator | 2025-08-29 19:47:02 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:02.963857 | orchestrator | 2025-08-29 19:47:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:06.093694 | orchestrator | 2025-08-29 19:47:06 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:06.093804 | orchestrator | 2025-08-29 19:47:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:09.143549 | orchestrator | 2025-08-29 19:47:09 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:09.143645 | orchestrator | 2025-08-29 19:47:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:12.176350 | orchestrator | 2025-08-29 19:47:12 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:12.176443 | orchestrator | 2025-08-29 19:47:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:15.215833 | orchestrator | 2025-08-29 19:47:15 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:15.216107 | orchestrator | 2025-08-29 19:47:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:18.262807 | orchestrator | 2025-08-29 19:47:18 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:18.262927 | orchestrator | 2025-08-29 19:47:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:21.292111 | orchestrator | 2025-08-29 19:47:21 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:21.292222 | orchestrator | 2025-08-29 19:47:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:24.343281 | orchestrator | 2025-08-29 19:47:24 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:24.345139 | orchestrator | 2025-08-29 19:47:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:27.386562 | orchestrator | 2025-08-29 19:47:27 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:27.386661 | orchestrator | 2025-08-29 19:47:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:30.420305 | orchestrator | 2025-08-29 19:47:30 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:30.420422 | orchestrator | 2025-08-29 19:47:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:33.452562 | orchestrator | 2025-08-29 19:47:33 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:33.452642 | orchestrator | 2025-08-29 19:47:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:36.507047 | orchestrator | 2025-08-29 19:47:36 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:36.507152 | orchestrator | 2025-08-29 19:47:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:39.547525 | orchestrator | 2025-08-29 19:47:39 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:39.548402 | orchestrator | 2025-08-29 19:47:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:42.585137 | orchestrator | 2025-08-29 19:47:42 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:42.585268 | orchestrator | 2025-08-29 19:47:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:45.620965 | orchestrator | 2025-08-29 19:47:45 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:45.621041 | orchestrator | 2025-08-29 19:47:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:48.662505 | orchestrator | 2025-08-29 19:47:48 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:48.662593 | orchestrator | 2025-08-29 19:47:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:51.708538 | orchestrator | 2025-08-29 19:47:51 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:51.708665 | orchestrator | 2025-08-29 19:47:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:54.761056 | orchestrator | 2025-08-29 19:47:54 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:54.761159 | orchestrator | 2025-08-29 19:47:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:47:57.802305 | orchestrator | 2025-08-29 19:47:57 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:47:57.802408 | orchestrator | 2025-08-29 19:47:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:00.834847 | orchestrator | 2025-08-29 19:48:00 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:00.834954 | orchestrator | 2025-08-29 19:48:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:03.885352 | orchestrator | 2025-08-29 19:48:03 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:03.885475 | orchestrator | 2025-08-29 19:48:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:06.936028 | orchestrator | 2025-08-29 19:48:06 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:06.936134 | orchestrator | 2025-08-29 19:48:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:09.983514 | orchestrator | 2025-08-29 19:48:09 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:09.983624 | orchestrator | 2025-08-29 19:48:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:13.027080 | orchestrator | 2025-08-29 19:48:13 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:13.027170 | orchestrator | 2025-08-29 19:48:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:16.080188 | orchestrator | 2025-08-29 19:48:16 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:16.080309 | orchestrator | 2025-08-29 19:48:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:19.127457 | orchestrator | 2025-08-29 19:48:19 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:19.127578 | orchestrator | 2025-08-29 19:48:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:22.164926 | orchestrator | 2025-08-29 19:48:22 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:22.165027 | orchestrator | 2025-08-29 19:48:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:25.221339 | orchestrator | 2025-08-29 19:48:25 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:25.221442 | orchestrator | 2025-08-29 19:48:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:28.268856 | orchestrator | 2025-08-29 19:48:28 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:28.268957 | orchestrator | 2025-08-29 19:48:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:31.318386 | orchestrator | 2025-08-29 19:48:31 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:31.318477 | orchestrator | 2025-08-29 19:48:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:34.373940 | orchestrator | 2025-08-29 19:48:34 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:34.374121 | orchestrator | 2025-08-29 19:48:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:37.429718 | orchestrator | 2025-08-29 19:48:37 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:37.429890 | orchestrator | 2025-08-29 19:48:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:40.475645 | orchestrator | 2025-08-29 19:48:40 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:40.475836 | orchestrator | 2025-08-29 19:48:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:43.520131 | orchestrator | 2025-08-29 19:48:43 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:43.520251 | orchestrator | 2025-08-29 19:48:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:46.558619 | orchestrator | 2025-08-29 19:48:46 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:46.558693 | orchestrator | 2025-08-29 19:48:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:49.601296 | orchestrator | 2025-08-29 19:48:49 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:49.601375 | orchestrator | 2025-08-29 19:48:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:52.643366 | orchestrator | 2025-08-29 19:48:52 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:52.643457 | orchestrator | 2025-08-29 19:48:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:55.696356 | orchestrator | 2025-08-29 19:48:55 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:55.696446 | orchestrator | 2025-08-29 19:48:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:48:58.747071 | orchestrator | 2025-08-29 19:48:58 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:48:58.747147 | orchestrator | 2025-08-29 19:48:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:01.788140 | orchestrator | 2025-08-29 19:49:01 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:01.788226 | orchestrator | 2025-08-29 19:49:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:04.831276 | orchestrator | 2025-08-29 19:49:04 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:04.831392 | orchestrator | 2025-08-29 19:49:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:07.881054 | orchestrator | 2025-08-29 19:49:07 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:07.881166 | orchestrator | 2025-08-29 19:49:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:10.931835 | orchestrator | 2025-08-29 19:49:10 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:10.931916 | orchestrator | 2025-08-29 19:49:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:13.971914 | orchestrator | 2025-08-29 19:49:13 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:13.972019 | orchestrator | 2025-08-29 19:49:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:17.019528 | orchestrator | 2025-08-29 19:49:17 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:17.019604 | orchestrator | 2025-08-29 19:49:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:20.065345 | orchestrator | 2025-08-29 19:49:20 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:20.065449 | orchestrator | 2025-08-29 19:49:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:23.114416 | orchestrator | 2025-08-29 19:49:23 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:23.114548 | orchestrator | 2025-08-29 19:49:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:26.156957 | orchestrator | 2025-08-29 19:49:26 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:26.157049 | orchestrator | 2025-08-29 19:49:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:29.205169 | orchestrator | 2025-08-29 19:49:29 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:29.205267 | orchestrator | 2025-08-29 19:49:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:32.249042 | orchestrator | 2025-08-29 19:49:32 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:32.249124 | orchestrator | 2025-08-29 19:49:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:35.296143 | orchestrator | 2025-08-29 19:49:35 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:35.296228 | orchestrator | 2025-08-29 19:49:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:38.342970 | orchestrator | 2025-08-29 19:49:38 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:38.343101 | orchestrator | 2025-08-29 19:49:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:41.387418 | orchestrator | 2025-08-29 19:49:41 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:41.387504 | orchestrator | 2025-08-29 19:49:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:44.443021 | orchestrator | 2025-08-29 19:49:44 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:44.443126 | orchestrator | 2025-08-29 19:49:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:47.496914 | orchestrator | 2025-08-29 19:49:47 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:47.497172 | orchestrator | 2025-08-29 19:49:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:50.534129 | orchestrator | 2025-08-29 19:49:50 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:50.534205 | orchestrator | 2025-08-29 19:49:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:53.577934 | orchestrator | 2025-08-29 19:49:53 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:53.578057 | orchestrator | 2025-08-29 19:49:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:56.618453 | orchestrator | 2025-08-29 19:49:56 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:56.618546 | orchestrator | 2025-08-29 19:49:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:49:59.667921 | orchestrator | 2025-08-29 19:49:59 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:49:59.668093 | orchestrator | 2025-08-29 19:49:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:02.712366 | orchestrator | 2025-08-29 19:50:02 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:02.712457 | orchestrator | 2025-08-29 19:50:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:05.752822 | orchestrator | 2025-08-29 19:50:05 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:05.752908 | orchestrator | 2025-08-29 19:50:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:08.798491 | orchestrator | 2025-08-29 19:50:08 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:08.798569 | orchestrator | 2025-08-29 19:50:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:11.842005 | orchestrator | 2025-08-29 19:50:11 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:11.842149 | orchestrator | 2025-08-29 19:50:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:14.882686 | orchestrator | 2025-08-29 19:50:14 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:14.882772 | orchestrator | 2025-08-29 19:50:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:17.923375 | orchestrator | 2025-08-29 19:50:17 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:17.923471 | orchestrator | 2025-08-29 19:50:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:20.974091 | orchestrator | 2025-08-29 19:50:20 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:20.974191 | orchestrator | 2025-08-29 19:50:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:24.018915 | orchestrator | 2025-08-29 19:50:24 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:24.019024 | orchestrator | 2025-08-29 19:50:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:27.080718 | orchestrator | 2025-08-29 19:50:27 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:27.080794 | orchestrator | 2025-08-29 19:50:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:30.124076 | orchestrator | 2025-08-29 19:50:30 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:30.124159 | orchestrator | 2025-08-29 19:50:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:33.171927 | orchestrator | 2025-08-29 19:50:33 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:33.172022 | orchestrator | 2025-08-29 19:50:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:36.207239 | orchestrator | 2025-08-29 19:50:36 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:36.207354 | orchestrator | 2025-08-29 19:50:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:39.241541 | orchestrator | 2025-08-29 19:50:39 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:39.241643 | orchestrator | 2025-08-29 19:50:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:42.275815 | orchestrator | 2025-08-29 19:50:42 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:42.275928 | orchestrator | 2025-08-29 19:50:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:45.312195 | orchestrator | 2025-08-29 19:50:45 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:45.312296 | orchestrator | 2025-08-29 19:50:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:48.355327 | orchestrator | 2025-08-29 19:50:48 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:48.355562 | orchestrator | 2025-08-29 19:50:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:51.401755 | orchestrator | 2025-08-29 19:50:51 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:51.401845 | orchestrator | 2025-08-29 19:50:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:54.443262 | orchestrator | 2025-08-29 19:50:54 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:54.443368 | orchestrator | 2025-08-29 19:50:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:50:57.488882 | orchestrator | 2025-08-29 19:50:57 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED 2025-08-29 19:50:57.489986 | orchestrator | 2025-08-29 19:50:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 19:51:00.533294 | orchestrator | 2025-08-29 19:51:00 | INFO  | Task 
8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state STARTED
2025-08-29 19:51:00.533398 | orchestrator | 2025-08-29 19:51:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 19:51:18.796396 | orchestrator | 2025-08-29 19:51:18 | INFO  | Task 8ecd3b05-8964-48f0-965f-c3f9ee43020c is in state SUCCESS
2025-08-29 19:51:18.798817 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 19:51:18.798831 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 19:51:18.798838 | orchestrator | Friday 29 August 2025 19:45:00 +0000 (0:00:00.252) 0:00:00.252 *********
2025-08-29 19:51:18.798845 | orchestrator | ok: [testbed-node-0]
2025-08-29 19:51:18.798853 | orchestrator | ok: [testbed-node-1]
2025-08-29 19:51:18.798858 | orchestrator | ok: [testbed-node-2]
2025-08-29 19:51:18.798871 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 19:51:18.798878 | orchestrator | Friday 29 August 2025 19:45:00 +0000 (0:00:00.297) 0:00:00.550 *********
2025-08-29 19:51:18.798884 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-08-29 19:51:18.798890 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-08-29 19:51:18.798897 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-08-29 19:51:18.798909 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-08-29 19:51:18.798921 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 19:51:18.798927 | orchestrator | Friday 29 August 2025 19:45:00 +0000 (0:00:00.357) 0:00:00.907 *********
2025-08-29 19:51:18.798934 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:51:18.798946 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-08-29 19:51:18.798952 | orchestrator | Friday 29 August 2025 19:45:01 +0000 (0:00:00.504) 0:00:01.412 *********
2025-08-29 19:51:18.798958 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-08-29 19:51:18.798971 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-08-29 19:51:18.798977 | orchestrator | Friday 29 August 2025 19:45:04 +0000 (0:00:03.517) 0:00:04.930 *********
2025-08-29 19:51:18.798983 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-08-29 19:51:18.798989 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-08-29 19:51:18.799002 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-08-29 19:51:18.799007 | orchestrator | Friday 29 August 2025 19:45:11 +0000 (0:00:06.571) 0:00:11.501 *********
2025-08-29 19:51:18.799013 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 19:51:18.799025 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-08-29 19:51:18.799031 | orchestrator | Friday 29 August 2025 19:45:14 +0000 (0:00:03.339) 0:00:14.840 *********
2025-08-29 19:51:18.799037 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 19:51:18.799044 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-08-29 19:51:18.799050 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-08-29 19:51:18.799062 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-08-29 19:51:18.799068 | orchestrator | Friday 29 August 2025 19:45:22 +0000 (0:00:08.146) 0:00:22.987 *********
2025-08-29 19:51:18.799074 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 19:51:18.799102 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-08-29 19:51:18.799109 | orchestrator | Friday 29 August 2025 19:45:25 +0000 (0:00:03.058) 0:00:26.045 *********
2025-08-29 19:51:18.799136 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-08-29 19:51:18.799143 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-08-29 19:51:18.799155 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-08-29 19:51:18.799160 | orchestrator | Friday 29 August 2025 19:45:33 +0000 (0:00:07.516) 0:00:33.562 *********
2025-08-29 19:51:18.799166 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-08-29 19:51:18.799210 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-08-29 19:51:18.799219 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-08-29 19:51:18.799225 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-08-29 19:51:18.799232 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-08-29 19:51:18.799244 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 19:51:18.799250 | orchestrator | Friday 29 August 2025 19:45:49 +0000 (0:00:15.907) 0:00:49.469 *********
2025-08-29 19:51:18.799258 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 19:51:18.799271 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-08-29 19:51:18.799277 | orchestrator | Friday 29 August 2025 19:45:49 +0000 (0:00:00.559) 0:00:50.028 *********
2025-08-29 19:51:18.799301 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "
<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-08-29 19:51:18.799311 | orchestrator | 2025-08-29 19:51:18.799317 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:51:18.799325 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.799334 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.799342 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.799349 | orchestrator | 2025-08-29 19:51:18.799356 | orchestrator | 2025-08-29 19:51:18.799363 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:51:18.799370 | orchestrator | Friday 29 August 2025 19:45:53 +0000 (0:00:03.315) 0:00:53.343 ********* 2025-08-29 19:51:18.799377 | orchestrator | =============================================================================== 2025-08-29 19:51:18.799385 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.91s 2025-08-29 19:51:18.799392 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.15s 2025-08-29 19:51:18.799399 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.52s 2025-08-29 19:51:18.799406 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.57s 2025-08-29 19:51:18.799413 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.52s 2025-08-29 19:51:18.799421 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 
3.34s 2025-08-29 19:51:18.799428 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.32s 2025-08-29 19:51:18.799435 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.06s 2025-08-29 19:51:18.799449 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.56s 2025-08-29 19:51:18.799456 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.50s 2025-08-29 19:51:18.799463 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-08-29 19:51:18.799469 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-08-29 19:51:18.799475 | orchestrator | 2025-08-29 19:51:18.799482 | orchestrator | 2025-08-29 19:51:18.799488 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:51:18.799495 | orchestrator | 2025-08-29 19:51:18.799502 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:51:18.799508 | orchestrator | Friday 29 August 2025 19:44:36 +0000 (0:00:00.165) 0:00:00.165 ********* 2025-08-29 19:51:18.799516 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.799523 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:51:18.799530 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:51:18.799536 | orchestrator | 2025-08-29 19:51:18.799542 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 19:51:18.799549 | orchestrator | Friday 29 August 2025 19:44:37 +0000 (0:00:00.266) 0:00:00.432 ********* 2025-08-29 19:51:18.799556 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 19:51:18.799568 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 19:51:18.799575 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-08-29 19:51:18.799581 | orchestrator | 2025-08-29 19:51:18.799588 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-08-29 19:51:18.799595 | orchestrator | 2025-08-29 19:51:18.799601 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-08-29 19:51:18.799608 | orchestrator | Friday 29 August 2025 19:44:37 +0000 (0:00:00.570) 0:00:01.002 ********* 2025-08-29 19:51:18.799615 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.799623 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:51:18.799630 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:51:18.799637 | orchestrator | 2025-08-29 19:51:18.799644 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:51:18.799650 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.799657 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.799717 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.799724 | orchestrator | 2025-08-29 19:51:18.799730 | orchestrator | 2025-08-29 19:51:18.799736 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:51:18.799742 | orchestrator | Friday 29 August 2025 19:46:46 +0000 (0:02:08.791) 0:02:09.794 ********* 2025-08-29 19:51:18.799876 | orchestrator | =============================================================================== 2025-08-29 19:51:18.799884 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 128.79s 2025-08-29 19:51:18.799891 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-08-29 19:51:18.799898 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 0.27s 2025-08-29 19:51:18.799905 | orchestrator | 2025-08-29 19:51:18.799912 | orchestrator | 2025-08-29 19:51:18.799919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 19:51:18.799925 | orchestrator | 2025-08-29 19:51:18.799932 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-08-29 19:51:18.799946 | orchestrator | Friday 29 August 2025 19:42:43 +0000 (0:00:00.524) 0:00:00.524 ********* 2025-08-29 19:51:18.799954 | orchestrator | changed: [testbed-manager] 2025-08-29 19:51:18.799960 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.799966 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.799980 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.799987 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.799993 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.800000 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.800005 | orchestrator | 2025-08-29 19:51:18.800011 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 19:51:18.800018 | orchestrator | Friday 29 August 2025 19:42:44 +0000 (0:00:01.003) 0:00:01.528 ********* 2025-08-29 19:51:18.800024 | orchestrator | changed: [testbed-manager] 2025-08-29 19:51:18.800030 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800037 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.800044 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.800050 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.800057 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.800064 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.800071 | orchestrator | 2025-08-29 19:51:18.800078 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-08-29 19:51:18.800085 | orchestrator | Friday 29 August 2025 19:42:45 +0000 (0:00:00.978) 0:00:02.506 ********* 2025-08-29 19:51:18.800091 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-08-29 19:51:18.800097 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 19:51:18.800103 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 19:51:18.800109 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 19:51:18.800115 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-08-29 19:51:18.800123 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-08-29 19:51:18.800130 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-08-29 19:51:18.800136 | orchestrator | 2025-08-29 19:51:18.800143 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-08-29 19:51:18.800150 | orchestrator | 2025-08-29 19:51:18.800157 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 19:51:18.800162 | orchestrator | Friday 29 August 2025 19:42:47 +0000 (0:00:01.478) 0:00:03.984 ********* 2025-08-29 19:51:18.800169 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.800174 | orchestrator | 2025-08-29 19:51:18.800180 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-08-29 19:51:18.800187 | orchestrator | Friday 29 August 2025 19:42:48 +0000 (0:00:01.670) 0:00:05.655 ********* 2025-08-29 19:51:18.800193 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-08-29 19:51:18.800200 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-08-29 19:51:18.800206 | orchestrator | 2025-08-29 19:51:18.800212 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 
2025-08-29 19:51:18.800218 | orchestrator | Friday 29 August 2025 19:42:53 +0000 (0:00:04.265) 0:00:09.921 ********* 2025-08-29 19:51:18.800225 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 19:51:18.800231 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 19:51:18.800237 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800243 | orchestrator | 2025-08-29 19:51:18.800249 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 19:51:18.800255 | orchestrator | Friday 29 August 2025 19:42:57 +0000 (0:00:03.991) 0:00:13.913 ********* 2025-08-29 19:51:18.800262 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800269 | orchestrator | 2025-08-29 19:51:18.800281 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-08-29 19:51:18.800287 | orchestrator | Friday 29 August 2025 19:42:58 +0000 (0:00:00.766) 0:00:14.680 ********* 2025-08-29 19:51:18.800293 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800299 | orchestrator | 2025-08-29 19:51:18.800305 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-08-29 19:51:18.800317 | orchestrator | Friday 29 August 2025 19:42:59 +0000 (0:00:01.561) 0:00:16.241 ********* 2025-08-29 19:51:18.800324 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800330 | orchestrator | 2025-08-29 19:51:18.800336 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 19:51:18.800342 | orchestrator | Friday 29 August 2025 19:43:03 +0000 (0:00:03.756) 0:00:19.998 ********* 2025-08-29 19:51:18.800348 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.800354 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800360 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.800367 | orchestrator | 2025-08-29 19:51:18.800373 | 
orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 19:51:18.800379 | orchestrator | Friday 29 August 2025 19:43:03 +0000 (0:00:00.396) 0:00:20.394 ********* 2025-08-29 19:51:18.800384 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.800391 | orchestrator | 2025-08-29 19:51:18.800397 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-08-29 19:51:18.800403 | orchestrator | Friday 29 August 2025 19:43:37 +0000 (0:00:33.454) 0:00:53.849 ********* 2025-08-29 19:51:18.800409 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800415 | orchestrator | 2025-08-29 19:51:18.800421 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 19:51:18.800427 | orchestrator | Friday 29 August 2025 19:43:51 +0000 (0:00:14.823) 0:01:08.673 ********* 2025-08-29 19:51:18.800433 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.800439 | orchestrator | 2025-08-29 19:51:18.800445 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 19:51:18.800451 | orchestrator | Friday 29 August 2025 19:44:03 +0000 (0:00:11.763) 0:01:20.436 ********* 2025-08-29 19:51:18.800458 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.800465 | orchestrator | 2025-08-29 19:51:18.800471 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-08-29 19:51:18.800477 | orchestrator | Friday 29 August 2025 19:44:04 +0000 (0:00:00.824) 0:01:21.261 ********* 2025-08-29 19:51:18.800483 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.800489 | orchestrator | 2025-08-29 19:51:18.800501 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 19:51:18.800508 | orchestrator | Friday 29 August 2025 19:44:05 +0000 (0:00:00.460) 0:01:21.722 ********* 2025-08-29 
19:51:18.800514 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.800521 | orchestrator | 2025-08-29 19:51:18.800527 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 19:51:18.800534 | orchestrator | Friday 29 August 2025 19:44:05 +0000 (0:00:00.505) 0:01:22.227 ********* 2025-08-29 19:51:18.800540 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.800546 | orchestrator | 2025-08-29 19:51:18.800552 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 19:51:18.800559 | orchestrator | Friday 29 August 2025 19:44:24 +0000 (0:00:18.849) 0:01:41.076 ********* 2025-08-29 19:51:18.800565 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.800571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800577 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.800583 | orchestrator | 2025-08-29 19:51:18.800588 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-08-29 19:51:18.800594 | orchestrator | 2025-08-29 19:51:18.800600 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 19:51:18.800606 | orchestrator | Friday 29 August 2025 19:44:24 +0000 (0:00:00.337) 0:01:41.414 ********* 2025-08-29 19:51:18.800612 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.800618 | orchestrator | 2025-08-29 19:51:18.800624 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-08-29 19:51:18.800630 | orchestrator | Friday 29 August 2025 19:44:25 +0000 (0:00:00.583) 0:01:41.998 ********* 2025-08-29 19:51:18.800641 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800647 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 19:51:18.800653 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800674 | orchestrator | 2025-08-29 19:51:18.800681 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-08-29 19:51:18.800686 | orchestrator | Friday 29 August 2025 19:44:27 +0000 (0:00:02.139) 0:01:44.137 ********* 2025-08-29 19:51:18.800692 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800697 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.800703 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.800709 | orchestrator | 2025-08-29 19:51:18.800714 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 19:51:18.800720 | orchestrator | Friday 29 August 2025 19:44:29 +0000 (0:00:02.144) 0:01:46.282 ********* 2025-08-29 19:51:18.800725 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.800731 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800738 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.800743 | orchestrator | 2025-08-29 19:51:18.800749 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 19:51:18.800755 | orchestrator | Friday 29 August 2025 19:44:29 +0000 (0:00:00.355) 0:01:46.638 ********* 2025-08-29 19:51:18.800762 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 19:51:18.800769 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800775 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 19:51:18.800783 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.800789 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 19:51:18.800795 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-08-29 19:51:18.800801 | orchestrator | 2025-08-29 19:51:18.800807 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts 
exist] ****************** 2025-08-29 19:51:18.800814 | orchestrator | Friday 29 August 2025 19:44:39 +0000 (0:00:09.280) 0:01:55.918 ********* 2025-08-29 19:51:18.800821 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.800827 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.800838 | orchestrator | 2025-08-29 19:51:18.800844 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 19:51:18.800851 | orchestrator | Friday 29 August 2025 19:44:39 +0000 (0:00:00.302) 0:01:56.220 ********* 2025-08-29 19:51:18.800857 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 19:51:18.800864 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.800870 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 19:51:18.800877 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.800883 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 19:51:18.800998 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801005 | orchestrator | 2025-08-29 19:51:18.801012 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 19:51:18.801019 | orchestrator | Friday 29 August 2025 19:44:40 +0000 (0:00:00.574) 0:01:56.794 ********* 2025-08-29 19:51:18.801025 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801032 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801039 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.801045 | orchestrator | 2025-08-29 19:51:18.801052 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-08-29 19:51:18.801058 | orchestrator | Friday 29 August 2025 19:44:40 +0000 (0:00:00.456) 0:01:57.251 ********* 2025-08-29 19:51:18.801065 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801071 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801077 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.801084 | orchestrator | 2025-08-29 19:51:18.801090 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-08-29 19:51:18.801097 | orchestrator | Friday 29 August 2025 19:44:41 +0000 (0:00:01.066) 0:01:58.317 ********* 2025-08-29 19:51:18.801111 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801117 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801124 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.801129 | orchestrator | 2025-08-29 19:51:18.801135 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-08-29 19:51:18.801142 | orchestrator | Friday 29 August 2025 19:44:43 +0000 (0:00:01.929) 0:02:00.247 ********* 2025-08-29 19:51:18.801156 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801163 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801169 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.801175 | orchestrator | 2025-08-29 19:51:18.801181 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 19:51:18.801187 | orchestrator | Friday 29 August 2025 19:45:05 +0000 (0:00:21.498) 0:02:21.745 ********* 2025-08-29 19:51:18.801193 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801200 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801204 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.801207 | orchestrator | 2025-08-29 19:51:18.801211 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 19:51:18.801215 | orchestrator | Friday 29 August 2025 19:45:16 +0000 (0:00:11.325) 0:02:33.071 ********* 2025-08-29 19:51:18.801219 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.801223 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 19:51:18.801226 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801230 | orchestrator | 2025-08-29 19:51:18.801260 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 19:51:18.801265 | orchestrator | Friday 29 August 2025 19:45:17 +0000 (0:00:01.138) 0:02:34.209 ********* 2025-08-29 19:51:18.801268 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801272 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801276 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.801280 | orchestrator | 2025-08-29 19:51:18.801284 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 19:51:18.801287 | orchestrator | Friday 29 August 2025 19:45:28 +0000 (0:00:10.857) 0:02:45.067 ********* 2025-08-29 19:51:18.801291 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.801295 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801299 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801302 | orchestrator | 2025-08-29 19:51:18.801306 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 19:51:18.801310 | orchestrator | Friday 29 August 2025 19:45:29 +0000 (0:00:01.050) 0:02:46.118 ********* 2025-08-29 19:51:18.801314 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.801317 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801321 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801325 | orchestrator | 2025-08-29 19:51:18.801329 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 19:51:18.801332 | orchestrator | 2025-08-29 19:51:18.801336 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 19:51:18.801340 | orchestrator | Friday 29 August 2025 
19:45:29 +0000 (0:00:00.528) 0:02:46.647 ********* 2025-08-29 19:51:18.801344 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.801349 | orchestrator | 2025-08-29 19:51:18.801353 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 19:51:18.801356 | orchestrator | Friday 29 August 2025 19:45:30 +0000 (0:00:00.582) 0:02:47.230 ********* 2025-08-29 19:51:18.801360 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-08-29 19:51:18.801364 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 19:51:18.801368 | orchestrator | 2025-08-29 19:51:18.801372 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 19:51:18.801375 | orchestrator | Friday 29 August 2025 19:45:33 +0000 (0:00:03.262) 0:02:50.493 ********* 2025-08-29 19:51:18.801384 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 19:51:18.801393 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-08-29 19:51:18.801396 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 19:51:18.801400 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 19:51:18.801404 | orchestrator | 2025-08-29 19:51:18.801408 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 19:51:18.801411 | orchestrator | Friday 29 August 2025 19:45:40 +0000 (0:00:06.673) 0:02:57.166 ********* 2025-08-29 19:51:18.801415 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 19:51:18.801419 | orchestrator | 
2025-08-29 19:51:18.801423 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 19:51:18.801426 | orchestrator | Friday 29 August 2025 19:45:43 +0000 (0:00:03.129) 0:03:00.295 ********* 2025-08-29 19:51:18.801430 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 19:51:18.801434 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 19:51:18.801438 | orchestrator | 2025-08-29 19:51:18.801441 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-08-29 19:51:18.801445 | orchestrator | Friday 29 August 2025 19:45:47 +0000 (0:00:03.872) 0:03:04.168 ********* 2025-08-29 19:51:18.801449 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 19:51:18.801453 | orchestrator | 2025-08-29 19:51:18.801456 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 19:51:18.801460 | orchestrator | Friday 29 August 2025 19:45:50 +0000 (0:00:03.371) 0:03:07.539 ********* 2025-08-29 19:51:18.801464 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 19:51:18.801468 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 19:51:18.801471 | orchestrator | 2025-08-29 19:51:18.801475 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 19:51:18.801479 | orchestrator | Friday 29 August 2025 19:45:58 +0000 (0:00:07.749) 0:03:15.289 ********* 2025-08-29 19:51:18.801491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.801497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.801515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.801524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.801529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.801533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.801537 | orchestrator | 2025-08-29 19:51:18.801541 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-08-29 19:51:18.801549 | orchestrator | Friday 29 August 2025 19:45:59 +0000 (0:00:01.341) 0:03:16.631 ********* 2025-08-29 19:51:18.801553 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.801557 | orchestrator | 2025-08-29 19:51:18.801561 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 19:51:18.801564 | orchestrator | Friday 29 August 2025 19:46:00 +0000 (0:00:00.146) 0:03:16.777 ********* 2025-08-29 19:51:18.801568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.801572 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801576 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801579 | orchestrator | 2025-08-29 19:51:18.801583 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 19:51:18.801587 | orchestrator | Friday 29 August 2025 19:46:00 +0000 (0:00:00.309) 0:03:17.086 ********* 2025-08-29 19:51:18.801590 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 19:51:18.801594 | orchestrator | 2025-08-29 19:51:18.801598 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 19:51:18.801602 | orchestrator | Friday 29 August 2025 19:46:01 +0000 (0:00:00.881) 0:03:17.968 ********* 2025-08-29 19:51:18.801605 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.801609 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.801613 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.801616 | orchestrator | 2025-08-29 19:51:18.801620 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 19:51:18.801624 | orchestrator | Friday 29 August 2025 19:46:01 +0000 (0:00:00.287) 0:03:18.255 ********* 2025-08-29 19:51:18.801628 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.801631 | orchestrator | 2025-08-29 19:51:18.801638 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 19:51:18.801642 | orchestrator | Friday 29 August 2025 19:46:02 +0000 (0:00:00.551) 0:03:18.807 ********* 2025-08-29 19:51:18.801646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.801655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.801684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.801695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.801702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.801708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.801715 | orchestrator | 2025-08-29 19:51:18.801719 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 19:51:18.801723 | orchestrator | Friday 29 August 2025 19:46:04 +0000 (0:00:02.571) 0:03:21.378 ********* 2025-08-29 19:51:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 19:51:18.802076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2',
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802109 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.802121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802134 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.802147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802165 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.802171 | orchestrator | 2025-08-29 19:51:18.802178 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 19:51:18.802184 | orchestrator | Friday 29 August 2025 19:46:05 +0000 (0:00:01.065) 0:03:22.444 ********* 2025-08-29 
19:51:18.802194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802207 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.802219 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802238 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.802248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802263 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.802269 | orchestrator | 2025-08-29 19:51:18.802276 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2025-08-29 19:51:18.802282 | orchestrator | Friday 29 August 2025 19:46:06 +0000 (0:00:00.778) 0:03:23.223 ********* 2025-08-29 19:51:18.802293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802323 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802351 | orchestrator | 2025-08-29 19:51:18.802358 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 19:51:18.802364 | orchestrator | Friday 29 
August 2025 19:46:08 +0000 (0:00:02.296) 0:03:25.519 ********* 2025-08-29 19:51:18.802371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802427 | orchestrator | 2025-08-29 19:51:18.802433 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 19:51:18.802439 | orchestrator | Friday 29 August 2025 19:46:14 +0000 (0:00:05.167) 0:03:30.687 
********* 2025-08-29 19:51:18.802449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802466 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.802543 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802559 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.802569 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 19:51:18.802581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.802588 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.802594 | orchestrator | 2025-08-29 19:51:18.802600 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2025-08-29 19:51:18.802607 | orchestrator | Friday 29 August 2025 19:46:14 +0000 (0:00:00.583) 0:03:31.271 ********* 2025-08-29 19:51:18.802613 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.802620 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.802626 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.802633 | orchestrator | 2025-08-29 19:51:18.802639 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-08-29 19:51:18.802646 | orchestrator | Friday 29 August 2025 19:46:16 +0000 (0:00:01.708) 0:03:32.979 ********* 2025-08-29 19:51:18.802653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.802681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.802688 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.802695 | orchestrator | 2025-08-29 19:51:18.802703 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-08-29 19:51:18.802709 | orchestrator | Friday 29 August 2025 19:46:16 +0000 (0:00:00.348) 0:03:33.328 ********* 2025-08-29 19:51:18.802715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 19:51:18.802750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.802772 | orchestrator | 2025-08-29 19:51:18.802779 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 19:51:18.802785 | orchestrator | Friday 29 August 2025 19:46:18 +0000 (0:00:02.070) 0:03:35.399 ********* 2025-08-29 19:51:18.802791 | orchestrator | 2025-08-29 19:51:18.802798 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 19:51:18.802805 | orchestrator | Friday 29 August 2025 19:46:18 +0000 (0:00:00.136) 0:03:35.535 ********* 2025-08-29 19:51:18.802811 | orchestrator | 2025-08-29 19:51:18.802818 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 19:51:18.802829 | orchestrator | Friday 29 August 2025 19:46:18 +0000 (0:00:00.128) 0:03:35.664 ********* 2025-08-29 19:51:18.802835 | orchestrator | 2025-08-29 19:51:18.802842 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-08-29 19:51:18.802849 | orchestrator | Friday 29 August 2025 19:46:19 +0000 (0:00:00.132) 0:03:35.797 ********* 
2025-08-29 19:51:18.802856 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.802863 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.802869 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.802876 | orchestrator | 2025-08-29 19:51:18.802885 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-08-29 19:51:18.802892 | orchestrator | Friday 29 August 2025 19:46:38 +0000 (0:00:19.175) 0:03:54.972 ********* 2025-08-29 19:51:18.802900 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.802906 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.802913 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.802920 | orchestrator | 2025-08-29 19:51:18.802926 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-08-29 19:51:18.802931 | orchestrator | 2025-08-29 19:51:18.802937 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 19:51:18.802943 | orchestrator | Friday 29 August 2025 19:46:44 +0000 (0:00:06.029) 0:04:01.002 ********* 2025-08-29 19:51:18.802950 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.802956 | orchestrator | 2025-08-29 19:51:18.802962 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 19:51:18.802968 | orchestrator | Friday 29 August 2025 19:46:45 +0000 (0:00:01.263) 0:04:02.265 ********* 2025-08-29 19:51:18.802974 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.802981 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.802986 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.802993 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.802999 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 19:51:18.803006 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.803012 | orchestrator | 2025-08-29 19:51:18.803019 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-08-29 19:51:18.803025 | orchestrator | Friday 29 August 2025 19:46:46 +0000 (0:00:00.606) 0:04:02.872 ********* 2025-08-29 19:51:18.803031 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.803038 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.803044 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.803051 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 19:51:18.803057 | orchestrator | 2025-08-29 19:51:18.803063 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 19:51:18.803069 | orchestrator | Friday 29 August 2025 19:46:47 +0000 (0:00:01.042) 0:04:03.914 ********* 2025-08-29 19:51:18.803076 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-08-29 19:51:18.803082 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-08-29 19:51:18.803089 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-08-29 19:51:18.803095 | orchestrator | 2025-08-29 19:51:18.803105 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 19:51:18.803112 | orchestrator | Friday 29 August 2025 19:46:47 +0000 (0:00:00.693) 0:04:04.608 ********* 2025-08-29 19:51:18.803118 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-08-29 19:51:18.803124 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-08-29 19:51:18.803131 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-08-29 19:51:18.803137 | orchestrator | 2025-08-29 19:51:18.803143 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 19:51:18.803149 | 
orchestrator | Friday 29 August 2025 19:46:49 +0000 (0:00:01.177) 0:04:05.785 ********* 2025-08-29 19:51:18.803162 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-08-29 19:51:18.803169 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.803176 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-08-29 19:51:18.803182 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.803188 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-08-29 19:51:18.803194 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.803202 | orchestrator | 2025-08-29 19:51:18.803210 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-08-29 19:51:18.803217 | orchestrator | Friday 29 August 2025 19:46:49 +0000 (0:00:00.748) 0:04:06.534 ********* 2025-08-29 19:51:18.803224 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 19:51:18.803230 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 19:51:18.803237 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.803243 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 19:51:18.803249 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 19:51:18.803256 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.803262 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 19:51:18.803268 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 19:51:18.803273 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 19:51:18.803280 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 19:51:18.803289 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 19:51:18.803296 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 19:51:18.803302 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 19:51:18.803309 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 19:51:18.803315 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 19:51:18.803321 | orchestrator | 2025-08-29 19:51:18.803328 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-08-29 19:51:18.803334 | orchestrator | Friday 29 August 2025 19:46:50 +0000 (0:00:01.058) 0:04:07.592 ********* 2025-08-29 19:51:18.803340 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.803346 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.803353 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.803362 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.803368 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.803374 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.803380 | orchestrator | 2025-08-29 19:51:18.803387 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 19:51:18.803393 | orchestrator | Friday 29 August 2025 19:46:52 +0000 (0:00:01.472) 0:04:09.065 ********* 2025-08-29 19:51:18.803399 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.803404 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.803410 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.803417 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.803423 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.803429 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.803435 | orchestrator | 2025-08-29 19:51:18.803441 | orchestrator | 
TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 19:51:18.803447 | orchestrator | Friday 29 August 2025 19:46:53 +0000 (0:00:01.579) 0:04:10.644 ********* 2025-08-29 19:51:18.803455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-08-29 19:51:18.803481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803593 | orchestrator | 2025-08-29 19:51:18.803600 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 19:51:18.803606 | orchestrator | Friday 29 August 2025 19:46:56 +0000 (0:00:02.544) 0:04:13.189 ********* 2025-08-29 19:51:18.803612 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 19:51:18.803620 | orchestrator | 2025-08-29 19:51:18.803626 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 19:51:18.803632 | orchestrator | Friday 29 August 2025 19:46:57 +0000 (0:00:01.293) 0:04:14.483 ********* 2025-08-29 19:51:18.803639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.803803 | orchestrator | 2025-08-29 19:51:18.803810 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 19:51:18.803817 | orchestrator | Friday 29 August 2025 19:47:01 +0000 (0:00:03.652) 0:04:18.136 ********* 2025-08-29 19:51:18.803829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.803836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.803843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.803850 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.803861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.803872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.804182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804201 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.804208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.804219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.804228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804246 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.804262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.804268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804275 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.804287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.804294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804301 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.804308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.804315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804326 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.804333 | orchestrator | 2025-08-29 19:51:18.804340 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 19:51:18.804347 | orchestrator | Friday 29 August 2025 19:47:03 +0000 (0:00:01.598) 0:04:19.734 ********* 2025-08-29 19:51:18.804357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.804365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.804375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804382 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.804389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.804396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.804409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804416 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.804423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.804431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804440 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.804451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.804458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.804475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804482 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.804491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.804498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804504 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.804510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.804522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.804528 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.804535 | orchestrator | 2025-08-29 19:51:18.804542 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 19:51:18.804549 | orchestrator | Friday 29 August 2025 19:47:05 +0000 (0:00:02.371) 0:04:22.106 ********* 2025-08-29 19:51:18.804555 | 
orchestrator | skipping: [testbed-node-0]
2025-08-29 19:51:18.804562 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:51:18.804567 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:51:18.804573 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 19:51:18.804588 | orchestrator |
2025-08-29 19:51:18.804595 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-08-29 19:51:18.804602 | orchestrator | Friday 29 August 2025 19:47:06 +0000 (0:00:01.085) 0:04:23.192 *********
2025-08-29 19:51:18.804608 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 19:51:18.804614 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 19:51:18.804620 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 19:51:18.804626 | orchestrator |
2025-08-29 19:51:18.804632 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-08-29 19:51:18.804638 | orchestrator | Friday 29 August 2025 19:47:07 +0000 (0:00:01.009) 0:04:24.201 *********
2025-08-29 19:51:18.804643 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 19:51:18.804649 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 19:51:18.804654 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 19:51:18.804681 | orchestrator |
2025-08-29 19:51:18.804687 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-08-29 19:51:18.804693 | orchestrator | Friday 29 August 2025 19:47:08 +0000 (0:00:00.989) 0:04:25.191 *********
2025-08-29 19:51:18.804699 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:51:18.804704 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:51:18.804710 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:51:18.804716 | orchestrator |
2025-08-29 19:51:18.804723 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-08-29 19:51:18.804728 | orchestrator | Friday 29 August 2025 19:47:09 +0000 (0:00:00.583) 0:04:25.774 *********
2025-08-29 19:51:18.804735 | orchestrator | ok: [testbed-node-3]
2025-08-29 19:51:18.804741 | orchestrator | ok: [testbed-node-4]
2025-08-29 19:51:18.804747 | orchestrator | ok: [testbed-node-5]
2025-08-29 19:51:18.804753 | orchestrator |
2025-08-29 19:51:18.804759 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-08-29 19:51:18.804766 | orchestrator | Friday 29 August 2025 19:47:09 +0000 (0:00:00.718) 0:04:26.493 *********
2025-08-29 19:51:18.804772 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 19:51:18.804778 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 19:51:18.804784 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 19:51:18.804791 | orchestrator |
2025-08-29 19:51:18.804796 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-08-29 19:51:18.804802 | orchestrator | Friday 29 August 2025 19:47:10 +0000 (0:00:01.165) 0:04:27.659 *********
2025-08-29 19:51:18.804813 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 19:51:18.804820 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 19:51:18.804826 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 19:51:18.804833 | orchestrator |
2025-08-29 19:51:18.804838 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-08-29 19:51:18.804845 | orchestrator | Friday 29 August 2025 19:47:12 +0000 (0:00:01.216) 0:04:28.876 *********
2025-08-29 19:51:18.804852 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 19:51:18.804858 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 19:51:18.804864 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 19:51:18.804871 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-08-29 19:51:18.804877 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-08-29 19:51:18.804884 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-08-29 19:51:18.804889 | orchestrator |
2025-08-29 19:51:18.804896 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-08-29 19:51:18.804902 | orchestrator | Friday 29 August 2025 19:47:15 +0000 (0:00:03.728) 0:04:32.605 *********
2025-08-29 19:51:18.804908 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:51:18.804914 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:51:18.804929 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:51:18.804935 | orchestrator |
2025-08-29 19:51:18.804941 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-08-29 19:51:18.804947 | orchestrator | Friday 29 August 2025 19:47:16 +0000 (0:00:00.547) 0:04:33.153 *********
2025-08-29 19:51:18.804953 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:51:18.804960 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:51:18.804966 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:51:18.804973 | orchestrator |
2025-08-29 19:51:18.804979 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-08-29 19:51:18.804985 | orchestrator | Friday 29 August 2025 19:47:16 +0000 (0:00:00.322) 0:04:33.475 *********
2025-08-29 19:51:18.804992 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:51:18.804998 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:51:18.805005 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:51:18.805012 | orchestrator |
2025-08-29 19:51:18.805019 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-08-29 19:51:18.805025 | orchestrator | Friday 29 August 2025 19:47:18 +0000 (0:00:01.391) 0:04:34.866 *********
2025-08-29 19:51:18.805038 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 19:51:18.805045 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 19:51:18.805051 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 19:51:18.805057 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 19:51:18.805064 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 19:51:18.805070 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 19:51:18.805077 | orchestrator |
2025-08-29 19:51:18.805084 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-08-29 19:51:18.805091 | orchestrator | Friday 29 August 2025 19:47:21 +0000 (0:00:03.657) 0:04:38.523 *********
2025-08-29 19:51:18.805097 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 19:51:18.805104 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 19:51:18.805110 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 19:51:18.805117 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 19:51:18.805124 | orchestrator | changed: [testbed-node-3]
2025-08-29 19:51:18.805131 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 19:51:18.805137 | orchestrator | changed: [testbed-node-5]
2025-08-29 19:51:18.805144 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 19:51:18.805150 | orchestrator | changed: [testbed-node-4]
2025-08-29 19:51:18.805157 | orchestrator |
2025-08-29 19:51:18.805163 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-08-29 19:51:18.805168 | orchestrator | Friday 29 August 2025 19:47:25 +0000 (0:00:03.839) 0:04:42.363 *********
2025-08-29 19:51:18.805175 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:51:18.805180 | orchestrator |
2025-08-29 19:51:18.805187 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-08-29 19:51:18.805193 | orchestrator | Friday 29 August 2025 19:47:25 +0000 (0:00:00.158) 0:04:42.522 *********
2025-08-29 19:51:18.805200 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:51:18.805206 | orchestrator | skipping: [testbed-node-4]
2025-08-29 19:51:18.805212 | orchestrator | skipping: [testbed-node-5]
2025-08-29 19:51:18.805219 | orchestrator | skipping: [testbed-node-0]
2025-08-29 19:51:18.805225 | orchestrator | skipping: [testbed-node-1]
2025-08-29 19:51:18.805236 | orchestrator | skipping: [testbed-node-2]
2025-08-29 19:51:18.805242 | orchestrator |
2025-08-29 19:51:18.805247 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-08-29 19:51:18.805253 | orchestrator | Friday 29 August 2025 19:47:26 +0000 (0:00:00.648) 0:04:43.170 *********
2025-08-29 19:51:18.805259 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 19:51:18.805265 | orchestrator |
2025-08-29 19:51:18.805271 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-08-29 19:51:18.805277 | orchestrator | Friday 29 August 2025 19:47:27 +0000 (0:00:00.691) 0:04:43.862 *********
2025-08-29 19:51:18.805287 | orchestrator | skipping: [testbed-node-3]
2025-08-29 19:51:18.805293 |
orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.805300 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.805305 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.805311 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.805317 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.805323 | orchestrator | 2025-08-29 19:51:18.805329 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-08-29 19:51:18.805335 | orchestrator | Friday 29 August 2025 19:47:28 +0000 (0:00:00.823) 0:04:44.685 ********* 2025-08-29 19:51:18.805341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805356 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805384 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805468 | orchestrator | 2025-08-29 19:51:18.805474 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 19:51:18.805480 | orchestrator | Friday 29 August 2025 19:47:31 +0000 (0:00:03.724) 0:04:48.410 ********* 2025-08-29 19:51:18.805487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.805541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.805551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.805558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.805568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.805574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.805584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2025-08-29 19:51:18.805630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.805657 | orchestrator | 2025-08-29 19:51:18.805712 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 19:51:18.805719 | orchestrator | Friday 29 August 2025 19:47:38 +0000 (0:00:06.823) 0:04:55.233 ********* 2025-08-29 19:51:18.805724 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.805730 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.805736 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.805743 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.805748 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.805754 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.805760 | orchestrator | 2025-08-29 19:51:18.805766 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 19:51:18.805772 | orchestrator | Friday 29 August 2025 19:47:39 +0000 (0:00:01.395) 0:04:56.629 ********* 2025-08-29 19:51:18.805778 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 19:51:18.805784 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 19:51:18.805790 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 19:51:18.805796 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 
19:51:18.805802 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 19:51:18.805808 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.805814 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 19:51:18.805828 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 19:51:18.805834 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.805840 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 19:51:18.805846 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 19:51:18.805853 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.805859 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 19:51:18.805865 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 19:51:18.805872 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 19:51:18.805878 | orchestrator | 2025-08-29 19:51:18.805884 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 19:51:18.805891 | orchestrator | Friday 29 August 2025 19:47:43 +0000 (0:00:03.574) 0:05:00.203 ********* 2025-08-29 19:51:18.805897 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.805904 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.805910 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.805917 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.805923 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.805929 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.805936 | 
orchestrator | 2025-08-29 19:51:18.805942 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 19:51:18.805948 | orchestrator | Friday 29 August 2025 19:47:44 +0000 (0:00:00.599) 0:05:00.803 ********* 2025-08-29 19:51:18.805954 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 19:51:18.805961 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 19:51:18.805967 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 19:51:18.805973 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 19:51:18.805980 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 19:51:18.805985 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 19:51:18.805991 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 19:51:18.805997 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 19:51:18.806003 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 19:51:18.806013 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 19:51:18.806062 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806069 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 
'nova-libvirt'})  2025-08-29 19:51:18.806076 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806082 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 19:51:18.806088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.806094 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 19:51:18.806099 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 19:51:18.806111 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 19:51:18.806118 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 19:51:18.806125 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 19:51:18.806131 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 19:51:18.806138 | orchestrator | 2025-08-29 19:51:18.806144 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 19:51:18.806150 | orchestrator | Friday 29 August 2025 19:47:49 +0000 (0:00:05.665) 0:05:06.468 ********* 2025-08-29 19:51:18.806157 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 19:51:18.806163 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 19:51:18.806170 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 19:51:18.806176 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'})  2025-08-29 19:51:18.806189 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 19:51:18.806196 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 19:51:18.806202 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 19:51:18.806209 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 19:51:18.806215 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 19:51:18.806222 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 19:51:18.806228 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 19:51:18.806234 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 19:51:18.806241 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 19:51:18.806248 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806254 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 19:51:18.806260 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 19:51:18.806267 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806273 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 19:51:18.806280 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.806286 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 19:51:18.806293 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 
19:51:18.806299 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 19:51:18.806306 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 19:51:18.806313 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 19:51:18.806319 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 19:51:18.806326 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 19:51:18.806333 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 19:51:18.806339 | orchestrator | 2025-08-29 19:51:18.806345 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 19:51:18.806356 | orchestrator | Friday 29 August 2025 19:47:56 +0000 (0:00:07.014) 0:05:13.483 ********* 2025-08-29 19:51:18.806362 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.806367 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.806373 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.806378 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.806383 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806389 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806395 | orchestrator | 2025-08-29 19:51:18.806401 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 19:51:18.806411 | orchestrator | Friday 29 August 2025 19:47:57 +0000 (0:00:00.699) 0:05:14.182 ********* 2025-08-29 19:51:18.806417 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.806423 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.806429 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.806435 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 19:51:18.806441 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806447 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806453 | orchestrator | 2025-08-29 19:51:18.806459 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 19:51:18.806465 | orchestrator | Friday 29 August 2025 19:47:58 +0000 (0:00:00.561) 0:05:14.744 ********* 2025-08-29 19:51:18.806471 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.806477 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806483 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806489 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.806496 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.806502 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.806508 | orchestrator | 2025-08-29 19:51:18.806514 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 19:51:18.806520 | orchestrator | Friday 29 August 2025 19:48:00 +0000 (0:00:01.995) 0:05:16.740 ********* 2025-08-29 19:51:18.806527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.806540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.806548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.806560 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.806566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.806576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.806583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.806590 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 19:51:18.806602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 19:51:18.806609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 19:51:18.806620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.806626 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.806635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.806642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.806649 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.806681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.806688 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.806694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 19:51:18.806705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 19:51:18.806711 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806717 | orchestrator | 2025-08-29 19:51:18.806723 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-08-29 19:51:18.806730 | orchestrator | Friday 29 August 2025 19:48:01 +0000 (0:00:01.215) 0:05:17.956 ********* 2025-08-29 19:51:18.806736 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 19:51:18.806742 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 19:51:18.806748 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.806754 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 19:51:18.806760 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 19:51:18.806767 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.806773 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 19:51:18.806780 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 19:51:18.806786 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.806793 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 19:51:18.806799 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 19:51:18.806805 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.806811 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 19:51:18.806821 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 19:51:18.806827 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.806833 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 19:51:18.806840 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 19:51:18.806846 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.806852 | orchestrator | 2025-08-29 19:51:18.806860 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-08-29 19:51:18.806866 | orchestrator | Friday 29 August 2025 19:48:02 +0000 (0:00:00.893) 0:05:18.849 ********* 2025-08-29 19:51:18.806873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806917 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.806998 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.807009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 19:51:18.807015 | orchestrator | 2025-08-29 19:51:18.807021 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 19:51:18.807027 | orchestrator | Friday 29 August 2025 19:48:05 +0000 (0:00:02.925) 0:05:21.775 ********* 2025-08-29 19:51:18.807033 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.807040 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.807046 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.807053 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.807060 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.807065 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.807072 | orchestrator | 2025-08-29 19:51:18.807077 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 19:51:18.807083 | orchestrator | Friday 29 August 2025 19:48:05 +0000 (0:00:00.854) 0:05:22.629 ********* 2025-08-29 19:51:18.807090 | orchestrator | 2025-08-29 19:51:18.807097 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 19:51:18.807103 | orchestrator | Friday 29 August 2025 19:48:06 +0000 (0:00:00.146) 0:05:22.775 ********* 2025-08-29 19:51:18.807110 | orchestrator | 2025-08-29 19:51:18.807116 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 19:51:18.807123 | orchestrator | Friday 29 August 2025 19:48:06 +0000 (0:00:00.135) 0:05:22.911 ********* 2025-08-29 19:51:18.807130 | orchestrator | 2025-08-29 19:51:18.807137 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 19:51:18.807143 | orchestrator | Friday 29 August 2025 19:48:06 +0000 (0:00:00.136) 0:05:23.048 ********* 2025-08-29 19:51:18.807150 | orchestrator | 2025-08-29 19:51:18.807156 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 19:51:18.807163 | orchestrator | Friday 29 August 2025 19:48:06 +0000 (0:00:00.136) 0:05:23.184 ********* 2025-08-29 19:51:18.807170 | orchestrator | 2025-08-29 19:51:18.807176 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 19:51:18.807183 | orchestrator | Friday 29 August 2025 19:48:06 +0000 (0:00:00.129) 0:05:23.314 ********* 2025-08-29 19:51:18.807189 | orchestrator | 2025-08-29 19:51:18.807196 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 
2025-08-29 19:51:18.807202 | orchestrator | Friday 29 August 2025 19:48:06 +0000 (0:00:00.308) 0:05:23.622 ********* 2025-08-29 19:51:18.807209 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.807215 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.807225 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.807232 | orchestrator | 2025-08-29 19:51:18.807239 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-08-29 19:51:18.807250 | orchestrator | Friday 29 August 2025 19:48:18 +0000 (0:00:11.852) 0:05:35.475 ********* 2025-08-29 19:51:18.807257 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.807263 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.807270 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.807276 | orchestrator | 2025-08-29 19:51:18.807282 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-08-29 19:51:18.807289 | orchestrator | Friday 29 August 2025 19:48:32 +0000 (0:00:14.111) 0:05:49.587 ********* 2025-08-29 19:51:18.807295 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.807302 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.807308 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.807314 | orchestrator | 2025-08-29 19:51:18.807321 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-08-29 19:51:18.807327 | orchestrator | Friday 29 August 2025 19:48:59 +0000 (0:00:26.836) 0:06:16.423 ********* 2025-08-29 19:51:18.807333 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.807340 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.807347 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.807353 | orchestrator | 2025-08-29 19:51:18.807360 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-08-29 
19:51:18.807366 | orchestrator | Friday 29 August 2025 19:49:42 +0000 (0:00:42.332) 0:06:58.755 ********* 2025-08-29 19:51:18.807372 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.807378 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.807383 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.807389 | orchestrator | 2025-08-29 19:51:18.807395 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-08-29 19:51:18.807400 | orchestrator | Friday 29 August 2025 19:49:42 +0000 (0:00:00.787) 0:06:59.543 ********* 2025-08-29 19:51:18.807406 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.807412 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.807417 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.807423 | orchestrator | 2025-08-29 19:51:18.807429 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-08-29 19:51:18.807436 | orchestrator | Friday 29 August 2025 19:49:43 +0000 (0:00:00.753) 0:07:00.297 ********* 2025-08-29 19:51:18.807442 | orchestrator | changed: [testbed-node-4] 2025-08-29 19:51:18.807447 | orchestrator | changed: [testbed-node-3] 2025-08-29 19:51:18.807453 | orchestrator | changed: [testbed-node-5] 2025-08-29 19:51:18.807459 | orchestrator | 2025-08-29 19:51:18.807470 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-08-29 19:51:18.807477 | orchestrator | Friday 29 August 2025 19:50:12 +0000 (0:00:28.719) 0:07:29.016 ********* 2025-08-29 19:51:18.807484 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.807489 | orchestrator | 2025-08-29 19:51:18.807495 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-08-29 19:51:18.807501 | orchestrator | Friday 29 August 2025 19:50:12 +0000 (0:00:00.129) 0:07:29.146 ********* 2025-08-29 19:51:18.807507 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.807513 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.807519 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.807525 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.807531 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.807537 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-08-29 19:51:18.807544 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:51:18.807550 | orchestrator | 2025-08-29 19:51:18.807556 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-08-29 19:51:18.807562 | orchestrator | Friday 29 August 2025 19:50:34 +0000 (0:00:22.360) 0:07:51.506 ********* 2025-08-29 19:51:18.807569 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.807580 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.807587 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.807593 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.807599 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.807605 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.807611 | orchestrator | 2025-08-29 19:51:18.807618 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-08-29 19:51:18.807624 | orchestrator | Friday 29 August 2025 19:50:43 +0000 (0:00:08.413) 0:07:59.920 ********* 2025-08-29 19:51:18.807630 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.807637 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.807644 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.807650 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.807656 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.807711 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-08-29 19:51:18.807719 | orchestrator | 2025-08-29 19:51:18.807725 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 19:51:18.807732 | orchestrator | Friday 29 August 2025 19:50:46 +0000 (0:00:03.728) 0:08:03.648 ********* 2025-08-29 19:51:18.807737 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:51:18.807744 | orchestrator | 2025-08-29 19:51:18.807750 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 19:51:18.807756 | orchestrator | Friday 29 August 2025 19:50:58 +0000 (0:00:11.473) 0:08:15.122 ********* 2025-08-29 19:51:18.807761 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:51:18.807768 | orchestrator | 2025-08-29 19:51:18.807774 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-08-29 19:51:18.807781 | orchestrator | Friday 29 August 2025 19:50:59 +0000 (0:00:01.421) 0:08:16.543 ********* 2025-08-29 19:51:18.807787 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.807794 | orchestrator | 2025-08-29 19:51:18.807800 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-08-29 19:51:18.807824 | orchestrator | Friday 29 August 2025 19:51:01 +0000 (0:00:01.313) 0:08:17.857 ********* 2025-08-29 19:51:18.807831 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 19:51:18.807838 | orchestrator | 2025-08-29 19:51:18.807852 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-08-29 19:51:18.807858 | orchestrator | Friday 29 August 2025 19:51:11 +0000 (0:00:10.316) 0:08:28.173 ********* 2025-08-29 19:51:18.807864 | orchestrator | ok: [testbed-node-3] 2025-08-29 19:51:18.807870 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 19:51:18.807877 | orchestrator | ok: [testbed-node-5] 2025-08-29 19:51:18.807882 | orchestrator | ok: [testbed-node-0] 2025-08-29 19:51:18.807888 | orchestrator | ok: [testbed-node-1] 2025-08-29 19:51:18.807894 | orchestrator | ok: [testbed-node-2] 2025-08-29 19:51:18.807901 | orchestrator | 2025-08-29 19:51:18.807906 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-08-29 19:51:18.807913 | orchestrator | 2025-08-29 19:51:18.807919 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-08-29 19:51:18.807925 | orchestrator | Friday 29 August 2025 19:51:13 +0000 (0:00:01.777) 0:08:29.950 ********* 2025-08-29 19:51:18.807931 | orchestrator | changed: [testbed-node-0] 2025-08-29 19:51:18.807937 | orchestrator | changed: [testbed-node-1] 2025-08-29 19:51:18.807943 | orchestrator | changed: [testbed-node-2] 2025-08-29 19:51:18.807949 | orchestrator | 2025-08-29 19:51:18.807954 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-08-29 19:51:18.807961 | orchestrator | 2025-08-29 19:51:18.807968 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-08-29 19:51:18.807974 | orchestrator | Friday 29 August 2025 19:51:14 +0000 (0:00:01.103) 0:08:31.053 ********* 2025-08-29 19:51:18.807980 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.807986 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.807998 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.808005 | orchestrator | 2025-08-29 19:51:18.808012 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-08-29 19:51:18.808019 | orchestrator | 2025-08-29 19:51:18.808026 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-08-29 19:51:18.808031 | orchestrator | Friday 
29 August 2025 19:51:14 +0000 (0:00:00.525) 0:08:31.579 ********* 2025-08-29 19:51:18.808038 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-08-29 19:51:18.808045 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 19:51:18.808052 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 19:51:18.808059 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-08-29 19:51:18.808065 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-08-29 19:51:18.808078 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-08-29 19:51:18.808085 | orchestrator | skipping: [testbed-node-3] 2025-08-29 19:51:18.808092 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-08-29 19:51:18.808099 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 19:51:18.808106 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 19:51:18.808112 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-08-29 19:51:18.808119 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-08-29 19:51:18.808127 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-08-29 19:51:18.808133 | orchestrator | skipping: [testbed-node-4] 2025-08-29 19:51:18.808140 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-08-29 19:51:18.808147 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 19:51:18.808155 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 19:51:18.808161 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-08-29 19:51:18.808168 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-08-29 19:51:18.808176 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  
2025-08-29 19:51:18.808182 | orchestrator | skipping: [testbed-node-5] 2025-08-29 19:51:18.808189 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-08-29 19:51:18.808196 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 19:51:18.808203 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 19:51:18.808210 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-08-29 19:51:18.808217 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-08-29 19:51:18.808223 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-08-29 19:51:18.808229 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.808236 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-08-29 19:51:18.808243 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 19:51:18.808250 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 19:51:18.808257 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-08-29 19:51:18.808264 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-08-29 19:51:18.808271 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-08-29 19:51:18.808278 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.808285 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-08-29 19:51:18.808291 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 19:51:18.808297 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 19:51:18.808303 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-08-29 19:51:18.808310 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-08-29 19:51:18.808323 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  
2025-08-29 19:51:18.808330 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.808337 | orchestrator | 2025-08-29 19:51:18.808345 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-08-29 19:51:18.808351 | orchestrator | 2025-08-29 19:51:18.808362 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-08-29 19:51:18.808369 | orchestrator | Friday 29 August 2025 19:51:16 +0000 (0:00:01.366) 0:08:32.945 ********* 2025-08-29 19:51:18.808377 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-08-29 19:51:18.808384 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-08-29 19:51:18.808390 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.808397 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-08-29 19:51:18.808403 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-08-29 19:51:18.808409 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.808414 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-08-29 19:51:18.808419 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-08-29 19:51:18.808425 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.808430 | orchestrator | 2025-08-29 19:51:18.808436 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-08-29 19:51:18.808442 | orchestrator | 2025-08-29 19:51:18.808449 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-08-29 19:51:18.808455 | orchestrator | Friday 29 August 2025 19:51:17 +0000 (0:00:00.778) 0:08:33.724 ********* 2025-08-29 19:51:18.808461 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.808468 | orchestrator | 2025-08-29 19:51:18.808475 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 
2025-08-29 19:51:18.808482 | orchestrator | 2025-08-29 19:51:18.808489 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-08-29 19:51:18.808496 | orchestrator | Friday 29 August 2025 19:51:17 +0000 (0:00:00.694) 0:08:34.419 ********* 2025-08-29 19:51:18.808503 | orchestrator | skipping: [testbed-node-0] 2025-08-29 19:51:18.808510 | orchestrator | skipping: [testbed-node-1] 2025-08-29 19:51:18.808517 | orchestrator | skipping: [testbed-node-2] 2025-08-29 19:51:18.808524 | orchestrator | 2025-08-29 19:51:18.808531 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 19:51:18.808538 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 19:51:18.808546 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-08-29 19:51:18.808557 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 19:51:18.808563 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 19:51:18.808569 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 19:51:18.808575 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-08-29 19:51:18.808580 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-08-29 19:51:18.808586 | orchestrator | 2025-08-29 19:51:18.808591 | orchestrator | 2025-08-29 19:51:18.808597 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 19:51:18.808603 | orchestrator | Friday 29 August 2025 19:51:18 +0000 (0:00:00.465) 0:08:34.884 ********* 2025-08-29 19:51:18.808614 | orchestrator | 
=============================================================================== 2025-08-29 19:51:18.808621 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.33s 2025-08-29 19:51:18.808627 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.46s 2025-08-29 19:51:18.808633 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.72s 2025-08-29 19:51:18.808640 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.84s 2025-08-29 19:51:18.808646 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.36s 2025-08-29 19:51:18.808653 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.50s 2025-08-29 19:51:18.808674 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.18s 2025-08-29 19:51:18.808680 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.85s 2025-08-29 19:51:18.808686 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.82s 2025-08-29 19:51:18.808693 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.11s 2025-08-29 19:51:18.808698 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.85s 2025-08-29 19:51:18.808704 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.76s 2025-08-29 19:51:18.808710 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.47s 2025-08-29 19:51:18.808716 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.33s 2025-08-29 19:51:18.808723 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.86s 2025-08-29 19:51:18.808730 | orchestrator | nova-cell : 
Discover nova hosts ---------------------------------------- 10.32s 2025-08-29 19:51:18.808737 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.28s 2025-08-29 19:51:18.808748 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.41s 2025-08-29 19:51:18.808756 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.75s 2025-08-29 19:51:18.808763 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.01s 2025-08-29 19:51:21.841192 | orchestrator | 2025-08-29 19:51:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 19:52:19.632387 | orchestrator | 2025-08-29 19:52:19.962640 | orchestrator | 2025-08-29 19:52:19.967712 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 19:52:19 UTC 2025 2025-08-29 19:52:19.967794 | orchestrator | 2025-08-29 19:52:20.467021 | orchestrator | ok: Runtime: 0:35:20.265352 2025-08-29 19:52:20.730346 | 2025-08-29 19:52:20.730549 | TASK [Bootstrap services] 2025-08-29 19:52:21.472596 | orchestrator | 2025-08-29 19:52:21.472966 | orchestrator | # BOOTSTRAP 2025-08-29 19:52:21.472998 | orchestrator | 2025-08-29 19:52:21.473014 | orchestrator | + set -e 2025-08-29 19:52:21.473027 | orchestrator | + echo 2025-08-29 19:52:21.473041 | orchestrator | + echo '# BOOTSTRAP' 2025-08-29 19:52:21.473060 | orchestrator | + echo 2025-08-29 19:52:21.473105 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-08-29 19:52:21.481609 | orchestrator | + set -e 2025-08-29 19:52:21.481759 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-08-29 19:52:26.137453 | orchestrator | 2025-08-29 19:52:26 | INFO  | It takes a moment
until task 3b6b9cdb-21e2-4e95-a172-c75661893bfe (flavor-manager) has been started and output is visible here. 2025-08-29 19:52:29.932161 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-08-29 19:52:29.932359 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:179 │ 2025-08-29 19:52:29.932404 | orchestrator | │ in run │ 2025-08-29 19:52:29.932425 | orchestrator | │ │ 2025-08-29 19:52:29.932444 | orchestrator | │ 176 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-08-29 19:52:29.932482 | orchestrator | │ 177 │ │ 2025-08-29 19:52:29.932503 | orchestrator | │ 178 │ definitions = get_flavor_definitions(name, url) │ 2025-08-29 19:52:29.932526 | orchestrator | │ ❱ 179 │ manager = FlavorManager( │ 2025-08-29 19:52:29.932545 | orchestrator | │ 180 │ │ cloud=Cloud(cloud), definitions=definitions, recommended=recom │ 2025-08-29 19:52:29.932563 | orchestrator | │ 181 │ ) │ 2025-08-29 19:52:29.932582 | orchestrator | │ 182 │ manager.run() │ 2025-08-29 19:52:29.932601 | orchestrator | │ │ 2025-08-29 19:52:29.932622 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-08-29 19:52:29.932659 | orchestrator | │ │ cloud = 'admin' │ │ 2025-08-29 19:52:29.932702 | orchestrator | │ │ debug = False │ │ 2025-08-29 19:52:29.932723 | orchestrator | │ │ definitions = { │ │ 2025-08-29 19:52:29.932740 | orchestrator | │ │ │ 'reference': [ │ │ 2025-08-29 19:52:29.932757 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-08-29 19:52:29.932774 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-08-29 19:52:29.932790 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-08-29 19:52:29.932807 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-08-29 19:52:29.932824 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-08-29 19:52:29.932843 | orchestrator | │ │ │ │ {'field': 
'disabled', 'default': False} │ │ 2025-08-29 19:52:29.932861 | orchestrator | │ │ │ ], │ │ 2025-08-29 19:52:29.932877 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-08-29 19:52:29.932894 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.932911 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-08-29 19:52:29.932964 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.932985 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 19:52:29.933003 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.933021 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 19:52:29.933038 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.933056 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-08-29 19:52:29.933074 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-08-29 19:52:29.933091 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.933109 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.933128 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.933145 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-08-29 19:52:29.933161 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.933178 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 19:52:29.933195 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 19:52:29.933211 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 19:52:29.933288 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.933310 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-08-29 19:52:29.933328 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-08-29 19:52:29.933346 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.933364 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.933382 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.933401 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-08-29 19:52:29.933431 | 
orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.933450 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 19:52:29.933467 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.933484 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.933506 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.933524 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-08-29 19:52:29.933542 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-08-29 19:52:29.933560 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.933578 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.933595 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.933613 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-08-29 19:52:29.933631 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.933795 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 19:52:29.933820 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 19:52:29.933838 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.933856 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.933871 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-08-29 19:52:29.933886 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-08-29 19:52:29.933896 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.933906 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.933915 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.933924 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-08-29 19:52:29.933934 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.933943 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:29.933953 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.933962 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.933972 | orchestrator | │ │ │ │ │ 
'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.933981 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-08-29 19:52:29.933991 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-08-29 19:52:29.934000 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.934009 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.934079 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.934104 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-08-29 19:52:29.934119 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.934167 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:29.934186 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 19:52:29.934219 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.957199 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.957288 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-08-29 19:52:29.957300 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-08-29 19:52:29.957311 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.957322 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.957332 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.957343 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-08-29 19:52:29.957354 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.957386 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 19:52:29.957399 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.957410 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.957420 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.957431 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-08-29 19:52:29.957442 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-08-29 19:52:29.957453 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.957463 | 
orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.957474 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.957485 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-08-29 19:52:29.957496 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.957506 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 19:52:29.957517 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-08-29 19:52:29.957528 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.957553 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.957575 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-08-29 19:52:29.957586 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-08-29 19:52:29.957597 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.957607 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.957618 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.957628 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-08-29 19:52:29.957639 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 19:52:29.957653 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:29.957664 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.957675 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.957708 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.957719 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-08-29 19:52:29.957730 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-08-29 19:52:29.957740 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.957763 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.957774 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.957785 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-08-29 19:52:29.957796 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 19:52:29.957813 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 
19:52:29.957825 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 19:52:29.957852 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.957863 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.957874 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-08-29 19:52:29.957885 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-08-29 19:52:29.957896 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.957906 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.957917 | orchestrator | │ │ │ │ ... +19 │ │ 2025-08-29 19:52:29.957927 | orchestrator | │ │ │ ] │ │ 2025-08-29 19:52:29.957938 | orchestrator | │ │ } │ │ 2025-08-29 19:52:29.957949 | orchestrator | │ │ level = 'INFO' │ │ 2025-08-29 19:52:29.957960 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-08-29 19:52:29.957971 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-08-29 19:52:29.957982 | orchestrator | │ │ name = 'local' │ │ 2025-08-29 19:52:29.957993 | orchestrator | │ │ recommended = True │ │ 2025-08-29 19:52:29.958003 | orchestrator | │ │ url = None │ │ 2025-08-29 19:52:29.958049 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-08-29 19:52:29.958094 | orchestrator | │ │ 2025-08-29 19:52:29.958106 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:97 │ 2025-08-29 19:52:29.958117 | orchestrator | │ in __init__ │ 2025-08-29 19:52:29.958127 | orchestrator | │ │ 2025-08-29 19:52:29.958138 | orchestrator | │ 94 │ │ self.required_flavors = definitions["mandatory"] │ 2025-08-29 19:52:29.958149 | orchestrator | │ 95 │ │ self.cloud = cloud │ 2025-08-29 19:52:29.958159 | orchestrator | │ 96 │ │ if recommended: │ 2025-08-29 19:52:29.958170 | orchestrator | │ ❱ 97 │ │ │ self.required_flavors = self.required_flavors + definition │ 2025-08-29 19:52:29.958181 | orchestrator | │ 98 │ │ │ 2025-08-29 
19:52:29.958191 | orchestrator | │ 99 │ │ self.defaults_dict = {} │ 2025-08-29 19:52:29.958202 | orchestrator | │ 100 │ │ for item in definitions["reference"]: │ 2025-08-29 19:52:29.958212 | orchestrator | │ │ 2025-08-29 19:52:29.958230 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-08-29 19:52:29.958244 | orchestrator | │ │ cloud = │ │ 2025-08-29 19:52:29.958273 | orchestrator | │ │ definitions = { │ │ 2025-08-29 19:52:29.958284 | orchestrator | │ │ │ 'reference': [ │ │ 2025-08-29 19:52:29.958294 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-08-29 19:52:29.958305 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-08-29 19:52:29.958316 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-08-29 19:52:29.958326 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-08-29 19:52:29.958337 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-08-29 19:52:29.958348 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-08-29 19:52:29.958358 | orchestrator | │ │ │ ], │ │ 2025-08-29 19:52:29.958369 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-08-29 19:52:29.958379 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.958398 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-08-29 19:52:29.976039 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.976123 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 19:52:29.976136 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.976147 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 19:52:29.976158 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.976168 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-08-29 19:52:29.976181 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-08-29 19:52:29.976192 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.976203 | orchestrator | │ │ │ 
│ }, │ │ 2025-08-29 19:52:29.976214 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.976224 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-08-29 19:52:29.976235 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.976246 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 19:52:29.976257 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 19:52:29.976268 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 19:52:29.976278 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.976289 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-08-29 19:52:29.976300 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-08-29 19:52:29.976310 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.976321 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.976352 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.976363 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-08-29 19:52:29.976374 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.976385 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 19:52:29.976396 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.976406 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.976417 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.976428 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-08-29 19:52:29.976439 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-08-29 19:52:29.976449 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.976508 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.976521 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.976532 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-08-29 19:52:29.976543 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.976554 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 19:52:29.976565 | orchestrator | 
│ │ │ │ │ 'disk': 5, │ │ 2025-08-29 19:52:29.976576 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.976586 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.976597 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-08-29 19:52:29.976608 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-08-29 19:52:29.976619 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.976630 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.976641 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.976667 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-08-29 19:52:29.976754 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.976770 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:29.976781 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.976791 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.976802 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.976813 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-08-29 19:52:29.976823 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-08-29 19:52:29.976834 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.976845 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.976864 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.976875 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-08-29 19:52:29.976886 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.976897 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:29.976908 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 19:52:29.976918 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.976929 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.976940 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-08-29 19:52:29.976950 | 
orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-08-29 19:52:29.976961 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.976972 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.976982 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.976993 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-08-29 19:52:29.977004 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.977015 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 19:52:29.977026 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:29.977036 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.977047 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.977061 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-08-29 19:52:29.977079 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-08-29 19:52:29.977099 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.977119 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.977138 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:29.977160 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-08-29 19:52:29.977180 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 19:52:29.977200 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 19:52:29.977216 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-08-29 19:52:29.977227 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:29.977238 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:29.977249 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-08-29 19:52:29.977259 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-08-29 19:52:29.977270 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:29.977281 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:29.977308 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:30.040987 | orchestrator | │ │ │ │ │ 'name': 
'SCS-2V-4', │ │ 2025-08-29 19:52:30.041071 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 19:52:30.041078 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:30.041083 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 19:52:30.041089 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:30.041095 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:30.041100 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-08-29 19:52:30.041105 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-08-29 19:52:30.041110 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:30.041115 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:30.041120 | orchestrator | │ │ │ │ { │ │ 2025-08-29 19:52:30.041126 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-08-29 19:52:30.041131 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 19:52:30.041136 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 19:52:30.041141 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 19:52:30.041146 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 19:52:30.041151 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 19:52:30.041156 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-08-29 19:52:30.041161 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-08-29 19:52:30.041166 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 19:52:30.041171 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 19:52:30.041176 | orchestrator | │ │ │ │ ... 
+19 │ │ 2025-08-29 19:52:30.041182 | orchestrator | │ │ │ ] │ │ 2025-08-29 19:52:30.041187 | orchestrator | │ │ } │ │ 2025-08-29 19:52:30.041192 | orchestrator | │ │ recommended = True │ │ 2025-08-29 19:52:30.041198 | orchestrator | │ │ self = │ │ 2025-08-29 19:52:30.041209 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-08-29 19:52:30.041218 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-08-29 19:52:30.041224 | orchestrator | KeyError: 'recommended' 2025-08-29 19:52:30.777649 | orchestrator | ERROR 2025-08-29 19:52:30.778191 | orchestrator | { 2025-08-29 19:52:30.778415 | orchestrator | "delta": "0:00:09.245637", 2025-08-29 19:52:30.778532 | orchestrator | "end": "2025-08-29 19:52:30.341730", 2025-08-29 19:52:30.778594 | orchestrator | "msg": "non-zero return code", 2025-08-29 19:52:30.778647 | orchestrator | "rc": 1, 2025-08-29 19:52:30.778700 | orchestrator | "start": "2025-08-29 19:52:21.096093" 2025-08-29 19:52:30.778750 | orchestrator | } failure 2025-08-29 19:52:30.801816 | 2025-08-29 19:52:30.802038 | PLAY RECAP 2025-08-29 19:52:30.802185 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-08-29 19:52:30.802272 | 2025-08-29 19:52:31.040929 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-08-29 19:52:31.043218 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 19:52:31.787618 | 2025-08-29 19:52:31.787900 | PLAY [Post output play] 2025-08-29 19:52:31.804288 | 2025-08-29 19:52:31.804479 | LOOP [stage-output : Register sources] 2025-08-29 19:52:31.883646 | 2025-08-29 19:52:31.883972 | TASK [stage-output : Check sudo] 2025-08-29 19:52:32.728134 | orchestrator | sudo: a password is required 2025-08-29 19:52:32.923287 | orchestrator | ok: Runtime: 0:00:00.012881 2025-08-29 19:52:32.936881 | 2025-08-29 19:52:32.937033 | LOOP 
[stage-output : Set source and destination for files and folders] 2025-08-29 19:52:32.991247 | 2025-08-29 19:52:32.991644 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-08-29 19:52:33.070582 | orchestrator | ok 2025-08-29 19:52:33.080311 | 2025-08-29 19:52:33.080521 | LOOP [stage-output : Ensure target folders exist] 2025-08-29 19:52:33.535788 | orchestrator | ok: "docs" 2025-08-29 19:52:33.536173 | 2025-08-29 19:52:33.813284 | orchestrator | ok: "artifacts" 2025-08-29 19:52:34.099247 | orchestrator | ok: "logs" 2025-08-29 19:52:34.121683 | 2025-08-29 19:52:34.121910 | LOOP [stage-output : Copy files and folders to staging folder] 2025-08-29 19:52:34.159930 | 2025-08-29 19:52:34.160224 | TASK [stage-output : Make all log files readable] 2025-08-29 19:52:34.473306 | orchestrator | ok 2025-08-29 19:52:34.483713 | 2025-08-29 19:52:34.483863 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-08-29 19:52:34.519405 | orchestrator | skipping: Conditional result was False 2025-08-29 19:52:34.535351 | 2025-08-29 19:52:34.535555 | TASK [stage-output : Discover log files for compression] 2025-08-29 19:52:34.560040 | orchestrator | skipping: Conditional result was False 2025-08-29 19:52:34.575429 | 2025-08-29 19:52:34.575612 | LOOP [stage-output : Archive everything from logs] 2025-08-29 19:52:34.619107 | 2025-08-29 19:52:34.619268 | PLAY [Post cleanup play] 2025-08-29 19:52:34.627685 | 2025-08-29 19:52:34.627799 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 19:52:34.695671 | orchestrator | ok 2025-08-29 19:52:34.709307 | 2025-08-29 19:52:34.709465 | TASK [Set cloud fact (local deployment)] 2025-08-29 19:52:34.743715 | orchestrator | skipping: Conditional result was False 2025-08-29 19:52:34.758953 | 2025-08-29 19:52:34.759094 | TASK [Clean the cloud environment] 2025-08-29 19:52:35.410003 | orchestrator | 2025-08-29 19:52:35 - clean up servers 2025-08-29 19:52:36.170077 | orchestrator | 2025-08-29 19:52:36 - 
testbed-manager 2025-08-29 19:52:36.256477 | orchestrator | 2025-08-29 19:52:36 - testbed-node-5 2025-08-29 19:52:36.348471 | orchestrator | 2025-08-29 19:52:36 - testbed-node-4 2025-08-29 19:52:36.442446 | orchestrator | 2025-08-29 19:52:36 - testbed-node-0 2025-08-29 19:52:36.532941 | orchestrator | 2025-08-29 19:52:36 - testbed-node-3 2025-08-29 19:52:36.637430 | orchestrator | 2025-08-29 19:52:36 - testbed-node-1 2025-08-29 19:52:36.732035 | orchestrator | 2025-08-29 19:52:36 - testbed-node-2 2025-08-29 19:52:36.821151 | orchestrator | 2025-08-29 19:52:36 - clean up keypairs 2025-08-29 19:52:36.842793 | orchestrator | 2025-08-29 19:52:36 - testbed 2025-08-29 19:52:36.869931 | orchestrator | 2025-08-29 19:52:36 - wait for servers to be gone 2025-08-29 19:52:45.588372 | orchestrator | 2025-08-29 19:52:45 - clean up ports 2025-08-29 19:52:45.769104 | orchestrator | 2025-08-29 19:52:45 - 01434bd8-1102-4ed2-97f9-ed4ee1374f1f 2025-08-29 19:52:46.683176 | orchestrator | 2025-08-29 19:52:46 - 09c9e607-1981-4f82-8bb0-c1e720cd882f 2025-08-29 19:52:46.883333 | orchestrator | 2025-08-29 19:52:46 - 09fe87a7-e59c-4b58-873c-8abc0efc48b0 2025-08-29 19:52:47.205541 | orchestrator | 2025-08-29 19:52:47 - 266eecc6-8b0e-46dd-82c9-c7b80c4c0adf 2025-08-29 19:52:47.503529 | orchestrator | 2025-08-29 19:52:47 - 3dbac66b-b9e0-484b-b8c2-01a3068d3285 2025-08-29 19:52:47.717020 | orchestrator | 2025-08-29 19:52:47 - 8cac7f8e-4529-4de8-af49-8618a19f1cf9 2025-08-29 19:52:47.932197 | orchestrator | 2025-08-29 19:52:47 - 92dcb106-04ea-4dd3-951e-a305cfa5e7ba 2025-08-29 19:52:48.178183 | orchestrator | 2025-08-29 19:52:48 - clean up volumes 2025-08-29 19:52:48.323541 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-2-node-base 2025-08-29 19:52:48.365238 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-0-node-base 2025-08-29 19:52:48.412503 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-5-node-base 2025-08-29 19:52:48.452613 | orchestrator | 2025-08-29 19:52:48 - 
testbed-volume-1-node-base 2025-08-29 19:52:48.492826 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-4-node-base 2025-08-29 19:52:48.534871 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-3-node-base 2025-08-29 19:52:48.573515 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-7-node-4 2025-08-29 19:52:48.618837 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-manager-base 2025-08-29 19:52:48.660710 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-0-node-3 2025-08-29 19:52:48.705465 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-6-node-3 2025-08-29 19:52:48.747704 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-4-node-4 2025-08-29 19:52:48.787816 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-1-node-4 2025-08-29 19:52:48.832598 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-5-node-5 2025-08-29 19:52:48.874471 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-8-node-5 2025-08-29 19:52:48.919203 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-2-node-5 2025-08-29 19:52:48.964496 | orchestrator | 2025-08-29 19:52:48 - testbed-volume-3-node-3 2025-08-29 19:52:49.009920 | orchestrator | 2025-08-29 19:52:49 - disconnect routers 2025-08-29 19:52:49.120384 | orchestrator | 2025-08-29 19:52:49 - testbed 2025-08-29 19:52:50.112623 | orchestrator | 2025-08-29 19:52:50 - clean up subnets 2025-08-29 19:52:50.162342 | orchestrator | 2025-08-29 19:52:50 - subnet-testbed-management 2025-08-29 19:52:50.316658 | orchestrator | 2025-08-29 19:52:50 - clean up networks 2025-08-29 19:52:50.451047 | orchestrator | 2025-08-29 19:52:50 - net-testbed-management 2025-08-29 19:52:50.727178 | orchestrator | 2025-08-29 19:52:50 - clean up security groups 2025-08-29 19:52:50.770308 | orchestrator | 2025-08-29 19:52:50 - testbed-management 2025-08-29 19:52:50.881905 | orchestrator | 2025-08-29 19:52:50 - testbed-node 2025-08-29 19:52:50.985311 | orchestrator | 2025-08-29 19:52:50 - clean up floating ips 2025-08-29 19:52:51.021022 | 
orchestrator | 2025-08-29 19:52:51 - 81.163.193.11 2025-08-29 19:52:51.390286 | orchestrator | 2025-08-29 19:52:51 - clean up routers 2025-08-29 19:52:51.498237 | orchestrator | 2025-08-29 19:52:51 - testbed 2025-08-29 19:52:52.320928 | orchestrator | ok: Runtime: 0:00:17.180572 2025-08-29 19:52:52.323504 | 2025-08-29 19:52:52.323633 | PLAY RECAP 2025-08-29 19:52:52.323713 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-08-29 19:52:52.323747 | 2025-08-29 19:52:52.463669 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 19:52:52.465922 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 19:52:53.228673 | 2025-08-29 19:52:53.228846 | PLAY [Cleanup play] 2025-08-29 19:52:53.245270 | 2025-08-29 19:52:53.245941 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 19:52:53.303670 | orchestrator | ok 2025-08-29 19:52:53.313601 | 2025-08-29 19:52:53.313759 | TASK [Set cloud fact (local deployment)] 2025-08-29 19:52:53.349367 | orchestrator | skipping: Conditional result was False 2025-08-29 19:52:53.368371 | 2025-08-29 19:52:53.368532 | TASK [Clean the cloud environment] 2025-08-29 19:52:54.450461 | orchestrator | 2025-08-29 19:52:54 - clean up servers 2025-08-29 19:52:54.939810 | orchestrator | 2025-08-29 19:52:54 - clean up keypairs 2025-08-29 19:52:54.957780 | orchestrator | 2025-08-29 19:52:54 - wait for servers to be gone 2025-08-29 19:52:54.999434 | orchestrator | 2025-08-29 19:52:54 - clean up ports 2025-08-29 19:52:55.070713 | orchestrator | 2025-08-29 19:52:55 - clean up volumes 2025-08-29 19:52:55.134591 | orchestrator | 2025-08-29 19:52:55 - disconnect routers 2025-08-29 19:52:55.159878 | orchestrator | 2025-08-29 19:52:55 - clean up subnets 2025-08-29 19:52:55.182152 | orchestrator | 2025-08-29 19:52:55 - clean up networks 2025-08-29 19:52:55.304486 | orchestrator | 2025-08-29 19:52:55 - clean up security groups 
2025-08-29 19:52:55.338472 | orchestrator | 2025-08-29 19:52:55 - clean up floating ips 2025-08-29 19:52:55.365209 | orchestrator | 2025-08-29 19:52:55 - clean up routers 2025-08-29 19:52:55.908654 | orchestrator | ok: Runtime: 0:00:01.238459 2025-08-29 19:52:55.913366 | 2025-08-29 19:52:55.913568 | PLAY RECAP 2025-08-29 19:52:55.913688 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-08-29 19:52:55.913766 | 2025-08-29 19:52:56.053837 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 19:52:56.055127 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 19:52:56.801592 | 2025-08-29 19:52:56.801774 | PLAY [Base post-fetch] 2025-08-29 19:52:56.817291 | 2025-08-29 19:52:56.817425 | TASK [fetch-output : Set log path for multiple nodes] 2025-08-29 19:52:56.892985 | orchestrator | skipping: Conditional result was False 2025-08-29 19:52:56.903505 | 2025-08-29 19:52:56.903741 | TASK [fetch-output : Set log path for single node] 2025-08-29 19:52:56.955181 | orchestrator | ok 2025-08-29 19:52:56.964863 | 2025-08-29 19:52:56.965013 | LOOP [fetch-output : Ensure local output dirs] 2025-08-29 19:52:57.481212 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/work/logs" 2025-08-29 19:52:57.744255 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/work/artifacts" 2025-08-29 19:52:58.026409 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3b59bc79e5d64b9988697df210f773f3/work/docs" 2025-08-29 19:52:58.041731 | 2025-08-29 19:52:58.041923 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-08-29 19:52:59.042421 | orchestrator | changed: .d..t...... ./ 2025-08-29 19:52:59.042733 | orchestrator | changed: All items complete 2025-08-29 19:52:59.042766 | 2025-08-29 19:52:59.805492 | orchestrator | changed: .d..t...... 
./ 2025-08-29 19:53:00.537712 | orchestrator | changed: .d..t...... ./ 2025-08-29 19:53:00.561646 | 2025-08-29 19:53:00.561805 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-08-29 19:53:00.598599 | orchestrator | skipping: Conditional result was False 2025-08-29 19:53:00.601797 | orchestrator | skipping: Conditional result was False 2025-08-29 19:53:00.625227 | 2025-08-29 19:53:00.625383 | PLAY RECAP 2025-08-29 19:53:00.625512 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-08-29 19:53:00.625554 | 2025-08-29 19:53:00.759301 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 19:53:00.760373 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 19:53:01.521941 | 2025-08-29 19:53:01.522107 | PLAY [Base post] 2025-08-29 19:53:01.536859 | 2025-08-29 19:53:01.537000 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-08-29 19:53:02.530867 | orchestrator | changed 2025-08-29 19:53:02.541403 | 2025-08-29 19:53:02.541557 | PLAY RECAP 2025-08-29 19:53:02.541639 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-08-29 19:53:02.541720 | 2025-08-29 19:53:02.672808 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 19:53:02.675290 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-08-29 19:53:03.455043 | 2025-08-29 19:53:03.455233 | PLAY [Base post-logs] 2025-08-29 19:53:03.465757 | 2025-08-29 19:53:03.465891 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-08-29 19:53:03.912076 | localhost | changed 2025-08-29 19:53:03.928855 | 2025-08-29 19:53:03.929026 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-08-29 19:53:03.966571 | localhost | ok 2025-08-29 19:53:03.970132 | 2025-08-29 
19:53:03.970238 | TASK [Set zuul-log-path fact] 2025-08-29 19:53:03.986787 | localhost | ok 2025-08-29 19:53:03.996035 | 2025-08-29 19:53:03.996150 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-08-29 19:53:04.033573 | localhost | ok 2025-08-29 19:53:04.039170 | 2025-08-29 19:53:04.039342 | TASK [upload-logs : Create log directories] 2025-08-29 19:53:04.548289 | localhost | changed 2025-08-29 19:53:04.551550 | 2025-08-29 19:53:04.551669 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-08-29 19:53:05.056327 | localhost -> localhost | ok: Runtime: 0:00:00.005059 2025-08-29 19:53:05.065664 | 2025-08-29 19:53:05.065837 | TASK [upload-logs : Upload logs to log server] 2025-08-29 19:53:05.652813 | localhost | Output suppressed because no_log was given 2025-08-29 19:53:05.655870 | 2025-08-29 19:53:05.656027 | LOOP [upload-logs : Compress console log and json output] 2025-08-29 19:53:05.715153 | localhost | skipping: Conditional result was False 2025-08-29 19:53:05.720353 | localhost | skipping: Conditional result was False 2025-08-29 19:53:05.728006 | 2025-08-29 19:53:05.728222 | LOOP [upload-logs : Upload compressed console log and json output] 2025-08-29 19:53:05.778525 | localhost | skipping: Conditional result was False 2025-08-29 19:53:05.779219 | 2025-08-29 19:53:05.782538 | localhost | skipping: Conditional result was False 2025-08-29 19:53:05.790310 | 2025-08-29 19:53:05.790547 | LOOP [upload-logs : Upload console log and json output]
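
The deploy failure above comes from the traceback at `main.py:97`: the flavor definitions resolved for `name = 'local'` contain only `reference` and `mandatory` sections, but `FlavorManager.__init__` unconditionally indexes `definitions["recommended"]` when `recommended=True`, raising `KeyError: 'recommended'`. The following is a minimal sketch of the failure and the defensive pattern that avoids it; `collect_required_flavors` is a hypothetical helper for illustration, not the actual openstack_flavor_manager API.

```python
# Trimmed-down stand-in for the definitions shown in the traceback locals:
# only "reference" and "mandatory" are present, no "recommended" section.
definitions = {
    "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
    "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
}


def collect_required_flavors(definitions, recommended=True):
    """Hypothetical defensive variant of the failing logic: fall back to
    an empty list when the 'recommended' section is absent, instead of
    letting definitions["recommended"] raise KeyError."""
    required = list(definitions["mandatory"])
    if recommended:
        # dict.get with a default never raises on a missing key
        required += definitions.get("recommended", [])
    return required


flavors = collect_required_flavors(definitions, recommended=True)
print([f["name"] for f in flavors])  # → ['SCS-1L-1']
```

Under this assumption, passing `recommended=True` against a definitions source that ships no `recommended` list degrades gracefully to the mandatory set only, which is what the failing `flavor-manager` task would have needed here.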